00:00:00.000 Started by upstream project "autotest-nightly" build number 4348
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3711
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.013 The recommended git tool is: git
00:00:00.013 using credential 00000000-0000-0000-0000-000000000002
00:00:00.016 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.030 Fetching changes from the remote Git repository
00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.053 Using shallow fetch with depth 1
00:00:00.053 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.053 > git --version # timeout=10
00:00:00.066 > git --version # 'git version 2.39.2'
00:00:00.066 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.085 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.085 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.047 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.058 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.069 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.069 > git config core.sparsecheckout # timeout=10
00:00:03.079 > git read-tree -mu HEAD # timeout=10
00:00:03.093 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.115 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.115 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.235 [Pipeline] Start of Pipeline
00:00:03.249 [Pipeline] library
00:00:03.251 Loading library shm_lib@master
00:00:03.251 Library shm_lib@master is cached. Copying from home.
00:00:03.269 [Pipeline] node
00:00:03.298 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.313 [Pipeline] {
00:00:03.346 [Pipeline] catchError
00:00:03.347 [Pipeline] {
00:00:03.356 [Pipeline] wrap
00:00:03.361 [Pipeline] {
00:00:03.367 [Pipeline] stage
00:00:03.368 [Pipeline] { (Prologue)
00:00:03.381 [Pipeline] echo
00:00:03.382 Node: VM-host-WFP7
00:00:03.386 [Pipeline] cleanWs
00:00:03.396 [WS-CLEANUP] Deleting project workspace...
00:00:03.396 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.402 [WS-CLEANUP] done
00:00:03.544 [Pipeline] setCustomBuildProperty
00:00:03.616 [Pipeline] httpRequest
00:00:04.006 [Pipeline] echo
00:00:04.008 Sorcerer 10.211.164.101 is alive
00:00:04.016 [Pipeline] retry
00:00:04.018 [Pipeline] {
00:00:04.029 [Pipeline] httpRequest
00:00:04.034 HttpMethod: GET
00:00:04.034 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.034 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.036 Response Code: HTTP/1.1 200 OK
00:00:04.037 Success: Status code 200 is in the accepted range: 200,404
00:00:04.037 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.182 [Pipeline] }
00:00:04.192 [Pipeline] // retry
00:00:04.196 [Pipeline] sh
00:00:04.494 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.512 [Pipeline] httpRequest
00:00:05.925 [Pipeline] echo
00:00:05.927 Sorcerer 10.211.164.101 is alive
00:00:05.936 [Pipeline] retry
00:00:05.937 [Pipeline] {
00:00:05.949 [Pipeline] httpRequest
00:00:05.953 HttpMethod: GET
00:00:05.953 URL: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:05.954 Sending request to url: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:05.956 Response Code: HTTP/1.1 200 OK
00:00:05.956 Success: Status code 200 is in the accepted range: 200,404
00:00:05.956 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:21.317 [Pipeline] }
00:00:21.339 [Pipeline] // retry
00:00:21.347 [Pipeline] sh
00:00:21.633 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:24.181 [Pipeline] sh
00:00:24.465 + git -C spdk log --oneline -n5
00:00:24.466 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:00:24.466 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:00:24.466 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:00:24.466 0ea9ac02f accel/mlx5: Create pool of UMRs
00:00:24.466 60adca7e1 lib/mlx5: API to configure UMR
00:00:24.484 [Pipeline] writeFile
00:00:24.500 [Pipeline] sh
00:00:24.788 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:24.802 [Pipeline] sh
00:00:25.088 + cat autorun-spdk.conf
00:00:25.088 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.088 SPDK_RUN_ASAN=1
00:00:25.088 SPDK_RUN_UBSAN=1
00:00:25.088 SPDK_TEST_RAID=1
00:00:25.088 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.101 RUN_NIGHTLY=1
00:00:25.103 [Pipeline] }
00:00:25.120 [Pipeline] // stage
00:00:25.138 [Pipeline] stage
00:00:25.141 [Pipeline] { (Run VM)
00:00:25.156 [Pipeline] sh
00:00:25.447 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:25.447 + echo 'Start stage prepare_nvme.sh'
00:00:25.447 Start stage prepare_nvme.sh
00:00:25.447 + [[ -n 6 ]]
00:00:25.447 + disk_prefix=ex6
00:00:25.447 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:25.447 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:25.447 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:25.447 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.447 ++ SPDK_RUN_ASAN=1
00:00:25.447 ++ SPDK_RUN_UBSAN=1
00:00:25.447 ++ SPDK_TEST_RAID=1
00:00:25.447 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.447 ++ RUN_NIGHTLY=1
00:00:25.447 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:25.447 + nvme_files=()
00:00:25.447 + declare -A nvme_files
00:00:25.447 + backend_dir=/var/lib/libvirt/images/backends
00:00:25.447 + nvme_files['nvme.img']=5G
00:00:25.447 + nvme_files['nvme-cmb.img']=5G
00:00:25.447 + nvme_files['nvme-multi0.img']=4G
00:00:25.447 + nvme_files['nvme-multi1.img']=4G
00:00:25.447 + nvme_files['nvme-multi2.img']=4G
00:00:25.447 + nvme_files['nvme-openstack.img']=8G
00:00:25.447 + nvme_files['nvme-zns.img']=5G
00:00:25.447 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:25.447 + (( SPDK_TEST_FTL == 1 ))
00:00:25.447 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:25.447 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:00:25.447 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.447 + for nvme in "${!nvme_files[@]}"
00:00:25.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:00:25.707 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.707 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:00:25.707 + echo 'End stage prepare_nvme.sh'
00:00:25.707 End stage prepare_nvme.sh
00:00:25.720 [Pipeline] sh
00:00:26.005 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:26.005 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:00:26.005
00:00:26.005 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:26.005 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:26.005 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:26.005 HELP=0
00:00:26.005 DRY_RUN=0
00:00:26.005 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:00:26.005 NVME_DISKS_TYPE=nvme,nvme,
00:00:26.005 NVME_AUTO_CREATE=0
00:00:26.005 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:00:26.005 NVME_CMB=,,
00:00:26.005 NVME_PMR=,,
00:00:26.005 NVME_ZNS=,,
00:00:26.005 NVME_MS=,,
00:00:26.005 NVME_FDP=,,
00:00:26.005 SPDK_VAGRANT_DISTRO=fedora39
00:00:26.005 SPDK_VAGRANT_VMCPU=10
00:00:26.005 SPDK_VAGRANT_VMRAM=12288
00:00:26.005 SPDK_VAGRANT_PROVIDER=libvirt
00:00:26.005 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:26.005 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:26.005 SPDK_OPENSTACK_NETWORK=0
00:00:26.005 VAGRANT_PACKAGE_BOX=0
00:00:26.005 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:26.005 FORCE_DISTRO=true
00:00:26.005 VAGRANT_BOX_VERSION=
00:00:26.005 EXTRA_VAGRANTFILES=
00:00:26.005 NIC_MODEL=virtio
00:00:26.005
00:00:26.005 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:26.005 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:27.933 Bringing machine 'default' up with 'libvirt' provider...
00:00:28.193 ==> default: Creating image (snapshot of base box volume).
00:00:28.453 ==> default: Creating domain with the following settings...
00:00:28.453 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733687760_9a005e348d77e4b8a97f
00:00:28.453 ==> default: -- Domain type: kvm
00:00:28.453 ==> default: -- Cpus: 10
00:00:28.453 ==> default: -- Feature: acpi
00:00:28.453 ==> default: -- Feature: apic
00:00:28.453 ==> default: -- Feature: pae
00:00:28.453 ==> default: -- Memory: 12288M
00:00:28.453 ==> default: -- Memory Backing: hugepages:
00:00:28.453 ==> default: -- Management MAC:
00:00:28.453 ==> default: -- Loader:
00:00:28.453 ==> default: -- Nvram:
00:00:28.453 ==> default: -- Base box: spdk/fedora39
00:00:28.453 ==> default: -- Storage pool: default
00:00:28.453 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733687760_9a005e348d77e4b8a97f.img (20G)
00:00:28.453 ==> default: -- Volume Cache: default
00:00:28.453 ==> default: -- Kernel:
00:00:28.453 ==> default: -- Initrd:
00:00:28.453 ==> default: -- Graphics Type: vnc
00:00:28.453 ==> default: -- Graphics Port: -1
00:00:28.453 ==> default: -- Graphics IP: 127.0.0.1
00:00:28.453 ==> default: -- Graphics Password: Not defined
00:00:28.453 ==> default: -- Video Type: cirrus
00:00:28.453 ==> default: -- Video VRAM: 9216
00:00:28.453 ==> default: -- Sound Type:
00:00:28.453 ==> default: -- Keymap: en-us
00:00:28.453 ==> default: -- TPM Path:
00:00:28.453 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:28.453 ==> default: -- Command line args:
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:28.453 ==> default: -> value=-drive,
00:00:28.453 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:28.453 ==> default: -> value=-drive,
00:00:28.453 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:28.453 ==> default: -> value=-drive,
00:00:28.453 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:28.453 ==> default: -> value=-drive,
00:00:28.453 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:28.453 ==> default: -> value=-device,
00:00:28.453 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:28.454 ==> default: Creating shared folders metadata...
00:00:28.454 ==> default: Starting domain.
00:00:30.362 ==> default: Waiting for domain to get an IP address...
00:00:45.250 ==> default: Waiting for SSH to become available...
00:00:46.194 ==> default: Configuring and enabling network interfaces...
00:00:52.774 default: SSH address: 192.168.121.2:22
00:00:52.774 default: SSH username: vagrant
00:00:52.774 default: SSH auth method: private key
00:00:56.117 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:04.294 ==> default: Mounting SSHFS shared folder...
00:01:06.201 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:06.201 ==> default: Checking Mount..
00:01:08.110 ==> default: Folder Successfully Mounted!
00:01:08.111 ==> default: Running provisioner: file...
00:01:09.052 default: ~/.gitconfig => .gitconfig
00:01:09.623
00:01:09.623 SUCCESS!
00:01:09.623
00:01:09.623 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:09.623 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:09.623 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:09.623
00:01:09.634 [Pipeline] }
00:01:09.651 [Pipeline] // stage
00:01:09.661 [Pipeline] dir
00:01:09.662 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:09.663 [Pipeline] {
00:01:09.679 [Pipeline] catchError
00:01:09.681 [Pipeline] {
00:01:09.695 [Pipeline] sh
00:01:09.980 + vagrant ssh-config --host vagrant
00:01:09.980 + sed -ne /^Host/,$p
00:01:09.980 + tee ssh_conf
00:01:12.523 Host vagrant
00:01:12.523 HostName 192.168.121.2
00:01:12.523 User vagrant
00:01:12.523 Port 22
00:01:12.523 UserKnownHostsFile /dev/null
00:01:12.523 StrictHostKeyChecking no
00:01:12.523 PasswordAuthentication no
00:01:12.523 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:12.523 IdentitiesOnly yes
00:01:12.523 LogLevel FATAL
00:01:12.523 ForwardAgent yes
00:01:12.523 ForwardX11 yes
00:01:12.523
00:01:12.538 [Pipeline] withEnv
00:01:12.541 [Pipeline] {
00:01:12.554 [Pipeline] sh
00:01:12.838 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:12.838 source /etc/os-release
00:01:12.838 [[ -e /image.version ]] && img=$(< /image.version)
00:01:12.838 # Minimal, systemd-like check.
00:01:12.838 if [[ -e /.dockerenv ]]; then
00:01:12.838 # Clear garbage from the node's name:
00:01:12.838 # agt-er_autotest_547-896 -> autotest_547-896
00:01:12.838 # $HOSTNAME is the actual container id
00:01:12.838 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:12.838 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:12.838 # We can assume this is a mount from a host where container is running,
00:01:12.838 # so fetch its hostname to easily identify the target swarm worker.
00:01:12.838 container="$(< /etc/hostname) ($agent)"
00:01:12.838 else
00:01:12.838 # Fallback
00:01:12.838 container=$agent
00:01:12.838 fi
00:01:12.838 fi
00:01:12.838 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:12.838
00:01:13.111 [Pipeline] }
00:01:13.128 [Pipeline] // withEnv
00:01:13.137 [Pipeline] setCustomBuildProperty
00:01:13.154 [Pipeline] stage
00:01:13.156 [Pipeline] { (Tests)
00:01:13.176 [Pipeline] sh
00:01:13.459 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:13.732 [Pipeline] sh
00:01:14.013 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:14.286 [Pipeline] timeout
00:01:14.286 Timeout set to expire in 1 hr 30 min
00:01:14.287 [Pipeline] {
00:01:14.298 [Pipeline] sh
00:01:14.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:15.143 HEAD is now at a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:15.154 [Pipeline] sh
00:01:15.472 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:15.761 [Pipeline] sh
00:01:16.055 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:16.330 [Pipeline] sh
00:01:16.611 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:16.870 ++ readlink -f spdk_repo
00:01:16.870 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:16.870 + [[ -n /home/vagrant/spdk_repo ]]
00:01:16.870 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:16.870 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:16.870 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:16.870 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:16.870 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:16.870 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:16.870 + cd /home/vagrant/spdk_repo
00:01:16.870 + source /etc/os-release
00:01:16.870 ++ NAME='Fedora Linux'
00:01:16.870 ++ VERSION='39 (Cloud Edition)'
00:01:16.870 ++ ID=fedora
00:01:16.870 ++ VERSION_ID=39
00:01:16.870 ++ VERSION_CODENAME=
00:01:16.870 ++ PLATFORM_ID=platform:f39
00:01:16.870 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:16.870 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:16.870 ++ LOGO=fedora-logo-icon
00:01:16.870 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:16.870 ++ HOME_URL=https://fedoraproject.org/
00:01:16.870 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:16.870 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:16.870 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:16.870 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:16.870 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:16.870 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:16.870 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:16.870 ++ SUPPORT_END=2024-11-12
00:01:16.870 ++ VARIANT='Cloud Edition'
00:01:16.870 ++ VARIANT_ID=cloud
00:01:16.870 + uname -a
00:01:16.870 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:16.870 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:17.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:17.437 Hugepages
00:01:17.437 node hugesize free / total
00:01:17.437 node0 1048576kB 0 / 0
00:01:17.437 node0 2048kB 0 / 0
00:01:17.437
00:01:17.437 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:17.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:17.437 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:17.437 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:17.437 + rm -f /tmp/spdk-ld-path
00:01:17.437 + source autorun-spdk.conf
00:01:17.437 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.437 ++ SPDK_RUN_ASAN=1
00:01:17.437 ++ SPDK_RUN_UBSAN=1
00:01:17.437 ++ SPDK_TEST_RAID=1
00:01:17.437 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.437 ++ RUN_NIGHTLY=1
00:01:17.437 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:17.437 + [[ -n '' ]]
00:01:17.437 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:17.696 + for M in /var/spdk/build-*-manifest.txt
00:01:17.696 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:17.696 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:17.696 + for M in /var/spdk/build-*-manifest.txt
00:01:17.696 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:17.696 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:17.696 + for M in /var/spdk/build-*-manifest.txt
00:01:17.696 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:17.696 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:17.696 ++ uname
00:01:17.696 + [[ Linux == \L\i\n\u\x ]]
00:01:17.696 + sudo dmesg -T
00:01:17.696 + sudo dmesg --clear
00:01:17.696 + dmesg_pid=5426
00:01:17.696 + sudo dmesg -Tw
00:01:17.696 + [[ Fedora Linux == FreeBSD ]]
00:01:17.696 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.696 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.696 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:17.696 + [[ -x /usr/src/fio-static/fio ]]
00:01:17.696 + export FIO_BIN=/usr/src/fio-static/fio
00:01:17.696 + FIO_BIN=/usr/src/fio-static/fio
00:01:17.696 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:17.696 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:17.696 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:17.696 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.696 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.696 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:17.696 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.696 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.696 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:17.956 19:56:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:17.956 19:56:49 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.956 19:56:49 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:01:17.956 19:56:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:17.956 19:56:49 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:17.956 19:56:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:17.956 19:56:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:17.956 19:56:49 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:17.956 19:56:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:17.956 19:56:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:17.956 19:56:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:17.956 19:56:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.956 19:56:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.956 19:56:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.956 19:56:49 -- paths/export.sh@5 -- $ export PATH
00:01:17.956 19:56:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:17.956 19:56:49 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:17.957 19:56:49 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:17.957 19:56:49 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733687809.XXXXXX
00:01:17.957 19:56:49 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733687809.sQP0ZB
00:01:17.957 19:56:49 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:17.957 19:56:49 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:17.957 19:56:49 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:17.957 19:56:49 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:17.957 19:56:49 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:17.957 19:56:49 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:17.957 19:56:49 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:17.957 19:56:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.957 19:56:49 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:17.957 19:56:49 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:17.957 19:56:49 -- pm/common@17 -- $ local monitor
00:01:17.957 19:56:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.957 19:56:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.957 19:56:49 -- pm/common@25 -- $ sleep 1
00:01:17.957 19:56:49 -- pm/common@21 -- $ date +%s
00:01:17.957 19:56:49 -- pm/common@21 -- $ date +%s
00:01:17.957 19:56:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733687809
00:01:17.957 19:56:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733687809
00:01:17.957 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733687809_collect-vmstat.pm.log
00:01:17.957 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733687809_collect-cpu-load.pm.log
00:01:18.897 19:56:50 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:18.897 19:56:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:18.897 19:56:50 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:18.897 19:56:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:18.897 19:56:50 -- spdk/autobuild.sh@16 -- $ date -u
00:01:18.897 Sun Dec 8 07:56:50 PM UTC 2024
00:01:18.897 19:56:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:18.897 v25.01-pre-311-ga2f5e1c2d
00:01:18.897 19:56:50 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:18.897 19:56:50 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:18.897 19:56:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:18.897 19:56:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:18.897 19:56:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.897 ************************************
00:01:18.897 START TEST asan
00:01:18.897 ************************************
00:01:18.897 using asan
00:01:18.897 19:56:50 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:18.897
00:01:18.897 real 0m0.000s
00:01:18.897 user 0m0.000s
00:01:18.897 sys 0m0.000s
00:01:18.897 19:56:50 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:18.897 19:56:50 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:18.897 ************************************
00:01:18.897 END TEST asan
00:01:18.897 ************************************
00:01:19.158 19:56:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:19.158 19:56:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:19.158 19:56:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:19.158 19:56:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:19.158 19:56:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.158 ************************************
00:01:19.158 START TEST ubsan
00:01:19.158 ************************************
00:01:19.158 using ubsan
00:01:19.158 19:56:50 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:19.158
00:01:19.158 real 0m0.000s
00:01:19.158 user 0m0.000s
00:01:19.158 sys 0m0.000s
00:01:19.158 19:56:50 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:19.158 19:56:50 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:19.158 ************************************
00:01:19.158 END TEST ubsan
00:01:19.158 ************************************
00:01:19.158 19:56:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:19.158 19:56:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:19.158 19:56:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:19.158 19:56:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:19.158 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:19.158 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:19.728 Using 'verbs' RDMA provider
00:01:35.562 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:53.661 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:53.661 Creating mk/config.mk...done.
00:01:53.661 Creating mk/cc.flags.mk...done.
00:01:53.661 Type 'make' to build.
00:01:53.661 19:57:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:53.661 19:57:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:53.661 19:57:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:53.661 19:57:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.661 ************************************
00:01:53.661 START TEST make
00:01:53.661 ************************************
00:01:53.661 19:57:23 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:53.661 make[1]: Nothing to be done for 'all'.
00:02:01.851 The Meson build system
00:02:01.851 Version: 1.5.0
00:02:01.851 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:01.851 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:01.851 Build type: native build
00:02:01.851 Program cat found: YES (/usr/bin/cat)
00:02:01.851 Project name: DPDK
00:02:01.851 Project version: 24.03.0
00:02:01.851 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:01.851 C linker for the host machine: cc ld.bfd 2.40-14
00:02:01.851 Host machine cpu family: x86_64
00:02:01.851 Host machine cpu: x86_64
00:02:01.851 Message: ## Building in Developer Mode ##
00:02:01.851 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:01.851 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:01.851 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:01.852 Program python3 found: YES (/usr/bin/python3)
00:02:01.852 Program cat found: YES (/usr/bin/cat)
00:02:01.852 Compiler for C supports arguments -march=native: YES
00:02:01.852 Checking for size of "void *" : 8
00:02:01.852 Checking for size of "void *" : 8 (cached)
00:02:01.852 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:01.852 Library m found: YES
00:02:01.852 Library numa found: YES
00:02:01.852 Has header "numaif.h" : YES
00:02:01.852 Library fdt found: NO
00:02:01.852 Library execinfo found: NO
00:02:01.852 Has header "execinfo.h" : YES
00:02:01.852 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:01.852 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:01.852 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:01.852 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:01.852 Run-time dependency openssl found: YES 3.1.1
00:02:01.852 Run-time dependency libpcap found: YES 1.10.4
00:02:01.852 Has header "pcap.h" with dependency
libpcap: YES 00:02:01.852 Compiler for C supports arguments -Wcast-qual: YES 00:02:01.852 Compiler for C supports arguments -Wdeprecated: YES 00:02:01.852 Compiler for C supports arguments -Wformat: YES 00:02:01.852 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:01.852 Compiler for C supports arguments -Wformat-security: NO 00:02:01.852 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.852 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:01.852 Compiler for C supports arguments -Wnested-externs: YES 00:02:01.852 Compiler for C supports arguments -Wold-style-definition: YES 00:02:01.852 Compiler for C supports arguments -Wpointer-arith: YES 00:02:01.852 Compiler for C supports arguments -Wsign-compare: YES 00:02:01.852 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:01.852 Compiler for C supports arguments -Wundef: YES 00:02:01.852 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.852 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:01.852 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:01.852 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.852 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:01.852 Program objdump found: YES (/usr/bin/objdump) 00:02:01.852 Compiler for C supports arguments -mavx512f: YES 00:02:01.852 Checking if "AVX512 checking" compiles: YES 00:02:01.852 Fetching value of define "__SSE4_2__" : 1 00:02:01.852 Fetching value of define "__AES__" : 1 00:02:01.852 Fetching value of define "__AVX__" : 1 00:02:01.852 Fetching value of define "__AVX2__" : 1 00:02:01.852 Fetching value of define "__AVX512BW__" : 1 00:02:01.852 Fetching value of define "__AVX512CD__" : 1 00:02:01.852 Fetching value of define "__AVX512DQ__" : 1 00:02:01.852 Fetching value of define "__AVX512F__" : 1 00:02:01.852 Fetching value of define "__AVX512VL__" : 1 00:02:01.852 Fetching value of define 
"__PCLMUL__" : 1 00:02:01.852 Fetching value of define "__RDRND__" : 1 00:02:01.852 Fetching value of define "__RDSEED__" : 1 00:02:01.852 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:01.852 Fetching value of define "__znver1__" : (undefined) 00:02:01.852 Fetching value of define "__znver2__" : (undefined) 00:02:01.852 Fetching value of define "__znver3__" : (undefined) 00:02:01.852 Fetching value of define "__znver4__" : (undefined) 00:02:01.852 Library asan found: YES 00:02:01.852 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:01.852 Message: lib/log: Defining dependency "log" 00:02:01.852 Message: lib/kvargs: Defining dependency "kvargs" 00:02:01.852 Message: lib/telemetry: Defining dependency "telemetry" 00:02:01.852 Library rt found: YES 00:02:01.852 Checking for function "getentropy" : NO 00:02:01.852 Message: lib/eal: Defining dependency "eal" 00:02:01.852 Message: lib/ring: Defining dependency "ring" 00:02:01.852 Message: lib/rcu: Defining dependency "rcu" 00:02:01.852 Message: lib/mempool: Defining dependency "mempool" 00:02:01.852 Message: lib/mbuf: Defining dependency "mbuf" 00:02:01.852 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:01.852 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.852 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.852 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:01.852 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:01.852 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:01.852 Compiler for C supports arguments -mpclmul: YES 00:02:01.852 Compiler for C supports arguments -maes: YES 00:02:01.852 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.852 Compiler for C supports arguments -mavx512bw: YES 00:02:01.852 Compiler for C supports arguments -mavx512dq: YES 00:02:01.852 Compiler for C supports arguments -mavx512vl: YES 00:02:01.852 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:01.852 Compiler for C supports arguments -mavx2: YES 00:02:01.852 Compiler for C supports arguments -mavx: YES 00:02:01.852 Message: lib/net: Defining dependency "net" 00:02:01.852 Message: lib/meter: Defining dependency "meter" 00:02:01.852 Message: lib/ethdev: Defining dependency "ethdev" 00:02:01.852 Message: lib/pci: Defining dependency "pci" 00:02:01.852 Message: lib/cmdline: Defining dependency "cmdline" 00:02:01.852 Message: lib/hash: Defining dependency "hash" 00:02:01.852 Message: lib/timer: Defining dependency "timer" 00:02:01.852 Message: lib/compressdev: Defining dependency "compressdev" 00:02:01.852 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:01.852 Message: lib/dmadev: Defining dependency "dmadev" 00:02:01.852 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:01.852 Message: lib/power: Defining dependency "power" 00:02:01.852 Message: lib/reorder: Defining dependency "reorder" 00:02:01.852 Message: lib/security: Defining dependency "security" 00:02:01.852 Has header "linux/userfaultfd.h" : YES 00:02:01.852 Has header "linux/vduse.h" : YES 00:02:01.852 Message: lib/vhost: Defining dependency "vhost" 00:02:01.852 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:01.852 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:01.852 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:01.852 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:01.852 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:01.852 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:01.852 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:01.852 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:01.852 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:01.852 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:01.852 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:01.852 Configuring doxy-api-html.conf using configuration 00:02:01.852 Configuring doxy-api-man.conf using configuration 00:02:01.852 Program mandb found: YES (/usr/bin/mandb) 00:02:01.852 Program sphinx-build found: NO 00:02:01.852 Configuring rte_build_config.h using configuration 00:02:01.852 Message: 00:02:01.852 ================= 00:02:01.852 Applications Enabled 00:02:01.852 ================= 00:02:01.852 00:02:01.852 apps: 00:02:01.852 00:02:01.852 00:02:01.852 Message: 00:02:01.852 ================= 00:02:01.852 Libraries Enabled 00:02:01.852 ================= 00:02:01.852 00:02:01.852 libs: 00:02:01.852 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:01.852 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:01.852 cryptodev, dmadev, power, reorder, security, vhost, 00:02:01.852 00:02:01.852 Message: 00:02:01.852 =============== 00:02:01.852 Drivers Enabled 00:02:01.852 =============== 00:02:01.852 00:02:01.852 common: 00:02:01.852 00:02:01.852 bus: 00:02:01.852 pci, vdev, 00:02:01.852 mempool: 00:02:01.852 ring, 00:02:01.852 dma: 00:02:01.852 00:02:01.852 net: 00:02:01.852 00:02:01.852 crypto: 00:02:01.852 00:02:01.852 compress: 00:02:01.852 00:02:01.852 vdpa: 00:02:01.852 00:02:01.852 00:02:01.852 Message: 00:02:01.852 ================= 00:02:01.852 Content Skipped 00:02:01.852 ================= 00:02:01.852 00:02:01.852 apps: 00:02:01.852 dumpcap: explicitly disabled via build config 00:02:01.852 graph: explicitly disabled via build config 00:02:01.852 pdump: explicitly disabled via build config 00:02:01.852 proc-info: explicitly disabled via build config 00:02:01.852 test-acl: explicitly disabled via build config 00:02:01.852 test-bbdev: explicitly disabled via build config 00:02:01.852 test-cmdline: explicitly disabled via build config 00:02:01.852 test-compress-perf: explicitly disabled via build config 00:02:01.852 test-crypto-perf: explicitly disabled via build 
config 00:02:01.852 test-dma-perf: explicitly disabled via build config 00:02:01.852 test-eventdev: explicitly disabled via build config 00:02:01.852 test-fib: explicitly disabled via build config 00:02:01.852 test-flow-perf: explicitly disabled via build config 00:02:01.852 test-gpudev: explicitly disabled via build config 00:02:01.852 test-mldev: explicitly disabled via build config 00:02:01.852 test-pipeline: explicitly disabled via build config 00:02:01.852 test-pmd: explicitly disabled via build config 00:02:01.852 test-regex: explicitly disabled via build config 00:02:01.852 test-sad: explicitly disabled via build config 00:02:01.852 test-security-perf: explicitly disabled via build config 00:02:01.852 00:02:01.852 libs: 00:02:01.852 argparse: explicitly disabled via build config 00:02:01.852 metrics: explicitly disabled via build config 00:02:01.852 acl: explicitly disabled via build config 00:02:01.852 bbdev: explicitly disabled via build config 00:02:01.852 bitratestats: explicitly disabled via build config 00:02:01.852 bpf: explicitly disabled via build config 00:02:01.852 cfgfile: explicitly disabled via build config 00:02:01.852 distributor: explicitly disabled via build config 00:02:01.852 efd: explicitly disabled via build config 00:02:01.852 eventdev: explicitly disabled via build config 00:02:01.852 dispatcher: explicitly disabled via build config 00:02:01.853 gpudev: explicitly disabled via build config 00:02:01.853 gro: explicitly disabled via build config 00:02:01.853 gso: explicitly disabled via build config 00:02:01.853 ip_frag: explicitly disabled via build config 00:02:01.853 jobstats: explicitly disabled via build config 00:02:01.853 latencystats: explicitly disabled via build config 00:02:01.853 lpm: explicitly disabled via build config 00:02:01.853 member: explicitly disabled via build config 00:02:01.853 pcapng: explicitly disabled via build config 00:02:01.853 rawdev: explicitly disabled via build config 00:02:01.853 regexdev: explicitly 
disabled via build config 00:02:01.853 mldev: explicitly disabled via build config 00:02:01.853 rib: explicitly disabled via build config 00:02:01.853 sched: explicitly disabled via build config 00:02:01.853 stack: explicitly disabled via build config 00:02:01.853 ipsec: explicitly disabled via build config 00:02:01.853 pdcp: explicitly disabled via build config 00:02:01.853 fib: explicitly disabled via build config 00:02:01.853 port: explicitly disabled via build config 00:02:01.853 pdump: explicitly disabled via build config 00:02:01.853 table: explicitly disabled via build config 00:02:01.853 pipeline: explicitly disabled via build config 00:02:01.853 graph: explicitly disabled via build config 00:02:01.853 node: explicitly disabled via build config 00:02:01.853 00:02:01.853 drivers: 00:02:01.853 common/cpt: not in enabled drivers build config 00:02:01.853 common/dpaax: not in enabled drivers build config 00:02:01.853 common/iavf: not in enabled drivers build config 00:02:01.853 common/idpf: not in enabled drivers build config 00:02:01.853 common/ionic: not in enabled drivers build config 00:02:01.853 common/mvep: not in enabled drivers build config 00:02:01.853 common/octeontx: not in enabled drivers build config 00:02:01.853 bus/auxiliary: not in enabled drivers build config 00:02:01.853 bus/cdx: not in enabled drivers build config 00:02:01.853 bus/dpaa: not in enabled drivers build config 00:02:01.853 bus/fslmc: not in enabled drivers build config 00:02:01.853 bus/ifpga: not in enabled drivers build config 00:02:01.853 bus/platform: not in enabled drivers build config 00:02:01.853 bus/uacce: not in enabled drivers build config 00:02:01.853 bus/vmbus: not in enabled drivers build config 00:02:01.853 common/cnxk: not in enabled drivers build config 00:02:01.853 common/mlx5: not in enabled drivers build config 00:02:01.853 common/nfp: not in enabled drivers build config 00:02:01.853 common/nitrox: not in enabled drivers build config 00:02:01.853 common/qat: not 
in enabled drivers build config 00:02:01.853 common/sfc_efx: not in enabled drivers build config 00:02:01.853 mempool/bucket: not in enabled drivers build config 00:02:01.853 mempool/cnxk: not in enabled drivers build config 00:02:01.853 mempool/dpaa: not in enabled drivers build config 00:02:01.853 mempool/dpaa2: not in enabled drivers build config 00:02:01.853 mempool/octeontx: not in enabled drivers build config 00:02:01.853 mempool/stack: not in enabled drivers build config 00:02:01.853 dma/cnxk: not in enabled drivers build config 00:02:01.853 dma/dpaa: not in enabled drivers build config 00:02:01.853 dma/dpaa2: not in enabled drivers build config 00:02:01.853 dma/hisilicon: not in enabled drivers build config 00:02:01.853 dma/idxd: not in enabled drivers build config 00:02:01.853 dma/ioat: not in enabled drivers build config 00:02:01.853 dma/skeleton: not in enabled drivers build config 00:02:01.853 net/af_packet: not in enabled drivers build config 00:02:01.853 net/af_xdp: not in enabled drivers build config 00:02:01.853 net/ark: not in enabled drivers build config 00:02:01.853 net/atlantic: not in enabled drivers build config 00:02:01.853 net/avp: not in enabled drivers build config 00:02:01.853 net/axgbe: not in enabled drivers build config 00:02:01.853 net/bnx2x: not in enabled drivers build config 00:02:01.853 net/bnxt: not in enabled drivers build config 00:02:01.853 net/bonding: not in enabled drivers build config 00:02:01.853 net/cnxk: not in enabled drivers build config 00:02:01.853 net/cpfl: not in enabled drivers build config 00:02:01.853 net/cxgbe: not in enabled drivers build config 00:02:01.853 net/dpaa: not in enabled drivers build config 00:02:01.853 net/dpaa2: not in enabled drivers build config 00:02:01.853 net/e1000: not in enabled drivers build config 00:02:01.853 net/ena: not in enabled drivers build config 00:02:01.853 net/enetc: not in enabled drivers build config 00:02:01.853 net/enetfec: not in enabled drivers build config 
00:02:01.853 net/enic: not in enabled drivers build config 00:02:01.853 net/failsafe: not in enabled drivers build config 00:02:01.853 net/fm10k: not in enabled drivers build config 00:02:01.853 net/gve: not in enabled drivers build config 00:02:01.853 net/hinic: not in enabled drivers build config 00:02:01.853 net/hns3: not in enabled drivers build config 00:02:01.853 net/i40e: not in enabled drivers build config 00:02:01.853 net/iavf: not in enabled drivers build config 00:02:01.853 net/ice: not in enabled drivers build config 00:02:01.853 net/idpf: not in enabled drivers build config 00:02:01.853 net/igc: not in enabled drivers build config 00:02:01.853 net/ionic: not in enabled drivers build config 00:02:01.853 net/ipn3ke: not in enabled drivers build config 00:02:01.853 net/ixgbe: not in enabled drivers build config 00:02:01.853 net/mana: not in enabled drivers build config 00:02:01.853 net/memif: not in enabled drivers build config 00:02:01.853 net/mlx4: not in enabled drivers build config 00:02:01.853 net/mlx5: not in enabled drivers build config 00:02:01.853 net/mvneta: not in enabled drivers build config 00:02:01.853 net/mvpp2: not in enabled drivers build config 00:02:01.853 net/netvsc: not in enabled drivers build config 00:02:01.853 net/nfb: not in enabled drivers build config 00:02:01.853 net/nfp: not in enabled drivers build config 00:02:01.853 net/ngbe: not in enabled drivers build config 00:02:01.853 net/null: not in enabled drivers build config 00:02:01.853 net/octeontx: not in enabled drivers build config 00:02:01.853 net/octeon_ep: not in enabled drivers build config 00:02:01.853 net/pcap: not in enabled drivers build config 00:02:01.853 net/pfe: not in enabled drivers build config 00:02:01.853 net/qede: not in enabled drivers build config 00:02:01.853 net/ring: not in enabled drivers build config 00:02:01.853 net/sfc: not in enabled drivers build config 00:02:01.853 net/softnic: not in enabled drivers build config 00:02:01.853 net/tap: not in 
enabled drivers build config 00:02:01.853 net/thunderx: not in enabled drivers build config 00:02:01.853 net/txgbe: not in enabled drivers build config 00:02:01.853 net/vdev_netvsc: not in enabled drivers build config 00:02:01.853 net/vhost: not in enabled drivers build config 00:02:01.853 net/virtio: not in enabled drivers build config 00:02:01.853 net/vmxnet3: not in enabled drivers build config 00:02:01.853 raw/*: missing internal dependency, "rawdev" 00:02:01.853 crypto/armv8: not in enabled drivers build config 00:02:01.853 crypto/bcmfs: not in enabled drivers build config 00:02:01.853 crypto/caam_jr: not in enabled drivers build config 00:02:01.853 crypto/ccp: not in enabled drivers build config 00:02:01.853 crypto/cnxk: not in enabled drivers build config 00:02:01.853 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.853 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.853 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.853 crypto/mlx5: not in enabled drivers build config 00:02:01.853 crypto/mvsam: not in enabled drivers build config 00:02:01.853 crypto/nitrox: not in enabled drivers build config 00:02:01.853 crypto/null: not in enabled drivers build config 00:02:01.853 crypto/octeontx: not in enabled drivers build config 00:02:01.853 crypto/openssl: not in enabled drivers build config 00:02:01.853 crypto/scheduler: not in enabled drivers build config 00:02:01.853 crypto/uadk: not in enabled drivers build config 00:02:01.853 crypto/virtio: not in enabled drivers build config 00:02:01.853 compress/isal: not in enabled drivers build config 00:02:01.853 compress/mlx5: not in enabled drivers build config 00:02:01.853 compress/nitrox: not in enabled drivers build config 00:02:01.853 compress/octeontx: not in enabled drivers build config 00:02:01.853 compress/zlib: not in enabled drivers build config 00:02:01.853 regex/*: missing internal dependency, "regexdev" 00:02:01.853 ml/*: missing internal dependency, "mldev" 
00:02:01.853 vdpa/ifc: not in enabled drivers build config 00:02:01.853 vdpa/mlx5: not in enabled drivers build config 00:02:01.853 vdpa/nfp: not in enabled drivers build config 00:02:01.853 vdpa/sfc: not in enabled drivers build config 00:02:01.853 event/*: missing internal dependency, "eventdev" 00:02:01.853 baseband/*: missing internal dependency, "bbdev" 00:02:01.853 gpu/*: missing internal dependency, "gpudev" 00:02:01.853 00:02:01.853 00:02:01.853 Build targets in project: 85 00:02:01.853 00:02:01.853 DPDK 24.03.0 00:02:01.853 00:02:01.853 User defined options 00:02:01.853 buildtype : debug 00:02:01.853 default_library : shared 00:02:01.853 libdir : lib 00:02:01.853 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:01.853 b_sanitize : address 00:02:01.853 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:01.853 c_link_args : 00:02:01.853 cpu_instruction_set: native 00:02:01.853 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:01.853 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:01.853 enable_docs : false 00:02:01.853 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:01.853 enable_kmods : false 00:02:01.853 max_lcores : 128 00:02:01.853 tests : false 00:02:01.853 00:02:01.853 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.114 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:02.114 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:02.114 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.114 [3/268] Linking static target lib/librte_kvargs.a 00:02:02.114 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.114 [5/268] Linking static target lib/librte_log.a 00:02:02.114 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:02.374 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.633 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.633 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:02.633 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.633 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.633 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.633 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.633 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.894 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.894 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.894 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.894 [18/268] Linking static target lib/librte_telemetry.a 00:02:02.894 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.154 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.154 [21/268] Linking target lib/librte_log.so.24.1 00:02:03.154 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.154 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.154 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.154 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.154 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.154 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.413 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.413 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.413 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:03.413 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.413 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:03.672 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.672 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.672 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.672 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.672 [37/268] Linking target lib/librte_telemetry.so.24.1 00:02:03.672 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.672 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.931 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.931 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.931 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.931 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.931 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.931 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:02:03.931 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.190 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.190 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.450 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.450 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.450 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.450 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.450 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.450 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.450 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.450 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.708 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.709 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.709 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.709 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.709 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.968 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.968 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.968 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.968 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.968 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.226 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.226 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.486 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.486 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.486 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.486 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.486 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.745 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.745 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.745 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.745 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.745 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.745 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.745 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.005 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.005 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.005 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.262 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.262 [85/268] Linking static target lib/librte_eal.a 00:02:06.262 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.262 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.262 [88/268] Linking static target lib/librte_ring.a 00:02:06.262 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.262 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.262 [91/268] Linking static target lib/librte_rcu.a 00:02:06.521 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.521 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.521 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.521 [95/268] Linking static target lib/librte_mempool.a 00:02:06.521 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.781 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.781 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.781 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.781 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.781 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.039 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.039 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.039 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.039 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.298 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.298 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.298 [108/268] Linking static target lib/librte_net.a 00:02:07.298 [109/268] Linking static target lib/librte_meter.a 00:02:07.298 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.298 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.298 [112/268] Linking static target lib/librte_mbuf.a 00:02:07.557 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.557 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.557 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.557 [116/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.557 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.815 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.074 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.074 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.345 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.345 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.644 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.644 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.644 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.644 [126/268] Linking static target lib/librte_pci.a 00:02:08.644 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.907 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.907 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.907 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.907 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.166 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.167 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.167 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.167 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.167 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.167 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.167 
[138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.167 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.167 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.167 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.167 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.167 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.167 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.426 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.426 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.426 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.685 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.685 [149/268] Linking static target lib/librte_cmdline.a 00:02:09.685 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.685 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.685 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.945 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.945 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.945 [155/268] Linking static target lib/librte_timer.a 00:02:09.945 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.945 [157/268] Linking static target lib/librte_ethdev.a 00:02:09.945 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.204 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.204 [160/268] Linking static target 
lib/librte_compressdev.a 00:02:10.204 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.464 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.464 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:10.464 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.464 [165/268] Linking static target lib/librte_dmadev.a 00:02:10.464 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.464 [167/268] Linking static target lib/librte_hash.a 00:02:10.723 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.724 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:10.724 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.724 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:10.724 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.984 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.984 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.984 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.244 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.244 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.244 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.244 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.504 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.504 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.504 [182/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.504 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.504 [184/268] Linking static target lib/librte_cryptodev.a 00:02:11.763 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:11.763 [186/268] Linking static target lib/librte_power.a 00:02:11.763 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.023 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.023 [189/268] Linking static target lib/librte_reorder.a 00:02:12.023 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.023 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.023 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.023 [193/268] Linking static target lib/librte_security.a 00:02:12.284 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.548 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.548 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.810 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.810 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:12.810 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.071 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.340 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.340 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.340 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.340 [204/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.340 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:13.604 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.604 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.604 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.604 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.604 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.864 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.864 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.864 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.864 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.864 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.864 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.864 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.864 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:13.864 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.864 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.864 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:14.123 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.123 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.123 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.123 
[225/268] Linking static target drivers/librte_mempool_ring.a 00:02:14.123 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.383 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.321 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.714 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.714 [230/268] Linking target lib/librte_eal.so.24.1 00:02:16.714 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:16.714 [232/268] Linking target lib/librte_pci.so.24.1 00:02:16.714 [233/268] Linking target lib/librte_meter.so.24.1 00:02:16.714 [234/268] Linking target lib/librte_ring.so.24.1 00:02:16.714 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:16.714 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:16.714 [237/268] Linking target lib/librte_timer.so.24.1 00:02:16.714 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:16.714 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:16.714 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:16.714 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:16.714 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:16.714 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:16.973 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:16.973 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:16.973 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:16.973 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:16.973 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:02:16.973 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.232 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.232 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:17.232 [252/268] Linking target lib/librte_net.so.24.1 00:02:17.232 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.232 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.232 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.232 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.490 [257/268] Linking target lib/librte_hash.so.24.1 00:02:17.490 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:17.490 [259/268] Linking target lib/librte_security.so.24.1 00:02:17.490 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.057 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.057 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.316 [264/268] Linking target lib/librte_power.so.24.1 00:02:18.576 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.835 [266/268] Linking static target lib/librte_vhost.a 00:02:21.411 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.411 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:21.411 INFO: autodetecting backend as ninja 00:02:21.411 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:39.523 CC lib/log/log.o 00:02:39.523 CC lib/log/log_flags.o 00:02:39.523 CC lib/log/log_deprecated.o 00:02:39.523 CC lib/ut/ut.o 00:02:39.523 CC lib/ut_mock/mock.o 00:02:39.523 LIB 
libspdk_ut.a 00:02:39.523 LIB libspdk_log.a 00:02:39.523 LIB libspdk_ut_mock.a 00:02:39.523 SO libspdk_ut.so.2.0 00:02:39.523 SO libspdk_log.so.7.1 00:02:39.523 SO libspdk_ut_mock.so.6.0 00:02:39.523 SYMLINK libspdk_ut.so 00:02:39.523 SYMLINK libspdk_log.so 00:02:39.523 SYMLINK libspdk_ut_mock.so 00:02:39.523 CC lib/ioat/ioat.o 00:02:39.523 CC lib/util/base64.o 00:02:39.523 CC lib/util/cpuset.o 00:02:39.523 CC lib/util/crc16.o 00:02:39.523 CC lib/util/crc32.o 00:02:39.523 CC lib/util/bit_array.o 00:02:39.523 CC lib/util/crc32c.o 00:02:39.523 CC lib/dma/dma.o 00:02:39.523 CXX lib/trace_parser/trace.o 00:02:39.523 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.523 CC lib/vfio_user/host/vfio_user.o 00:02:39.523 CC lib/util/crc32_ieee.o 00:02:39.523 CC lib/util/crc64.o 00:02:39.523 CC lib/util/dif.o 00:02:39.523 CC lib/util/fd.o 00:02:39.523 LIB libspdk_dma.a 00:02:39.523 SO libspdk_dma.so.5.0 00:02:39.523 CC lib/util/fd_group.o 00:02:39.523 CC lib/util/file.o 00:02:39.523 CC lib/util/hexlify.o 00:02:39.523 LIB libspdk_ioat.a 00:02:39.523 SYMLINK libspdk_dma.so 00:02:39.523 CC lib/util/iov.o 00:02:39.523 SO libspdk_ioat.so.7.0 00:02:39.523 CC lib/util/math.o 00:02:39.523 CC lib/util/net.o 00:02:39.523 SYMLINK libspdk_ioat.so 00:02:39.523 CC lib/util/pipe.o 00:02:39.523 CC lib/util/strerror_tls.o 00:02:39.523 LIB libspdk_vfio_user.a 00:02:39.523 CC lib/util/string.o 00:02:39.523 SO libspdk_vfio_user.so.5.0 00:02:39.523 CC lib/util/uuid.o 00:02:39.523 CC lib/util/xor.o 00:02:39.523 SYMLINK libspdk_vfio_user.so 00:02:39.523 CC lib/util/zipf.o 00:02:39.523 CC lib/util/md5.o 00:02:39.523 LIB libspdk_util.a 00:02:39.523 SO libspdk_util.so.10.1 00:02:39.523 LIB libspdk_trace_parser.a 00:02:39.523 SO libspdk_trace_parser.so.6.0 00:02:39.523 SYMLINK libspdk_util.so 00:02:39.523 SYMLINK libspdk_trace_parser.so 00:02:39.523 CC lib/env_dpdk/env.o 00:02:39.523 CC lib/env_dpdk/memory.o 00:02:39.523 CC lib/env_dpdk/pci.o 00:02:39.523 CC lib/env_dpdk/init.o 00:02:39.523 CC 
lib/rdma_utils/rdma_utils.o 00:02:39.523 CC lib/env_dpdk/threads.o 00:02:39.523 CC lib/vmd/vmd.o 00:02:39.523 CC lib/json/json_parse.o 00:02:39.523 CC lib/idxd/idxd.o 00:02:39.523 CC lib/conf/conf.o 00:02:39.523 CC lib/idxd/idxd_user.o 00:02:39.523 LIB libspdk_conf.a 00:02:39.523 SO libspdk_conf.so.6.0 00:02:39.523 LIB libspdk_rdma_utils.a 00:02:39.523 CC lib/json/json_util.o 00:02:39.523 SO libspdk_rdma_utils.so.1.0 00:02:39.523 SYMLINK libspdk_conf.so 00:02:39.523 CC lib/json/json_write.o 00:02:39.523 SYMLINK libspdk_rdma_utils.so 00:02:39.523 CC lib/idxd/idxd_kernel.o 00:02:39.523 CC lib/env_dpdk/pci_ioat.o 00:02:39.523 CC lib/env_dpdk/pci_virtio.o 00:02:39.523 CC lib/vmd/led.o 00:02:39.523 CC lib/env_dpdk/pci_vmd.o 00:02:39.523 CC lib/env_dpdk/pci_idxd.o 00:02:39.523 CC lib/env_dpdk/pci_event.o 00:02:39.523 CC lib/env_dpdk/sigbus_handler.o 00:02:39.523 LIB libspdk_json.a 00:02:39.523 CC lib/env_dpdk/pci_dpdk.o 00:02:39.523 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:39.523 SO libspdk_json.so.6.0 00:02:39.523 CC lib/rdma_provider/common.o 00:02:39.523 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:39.523 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:39.523 LIB libspdk_idxd.a 00:02:39.523 SYMLINK libspdk_json.so 00:02:39.523 LIB libspdk_vmd.a 00:02:39.523 SO libspdk_idxd.so.12.1 00:02:39.523 SO libspdk_vmd.so.6.0 00:02:39.523 SYMLINK libspdk_idxd.so 00:02:39.523 SYMLINK libspdk_vmd.so 00:02:39.783 CC lib/jsonrpc/jsonrpc_server.o 00:02:39.783 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:39.783 CC lib/jsonrpc/jsonrpc_client.o 00:02:39.783 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:39.783 LIB libspdk_rdma_provider.a 00:02:39.783 SO libspdk_rdma_provider.so.7.0 00:02:39.783 SYMLINK libspdk_rdma_provider.so 00:02:40.043 LIB libspdk_jsonrpc.a 00:02:40.043 SO libspdk_jsonrpc.so.6.0 00:02:40.043 SYMLINK libspdk_jsonrpc.so 00:02:40.302 LIB libspdk_env_dpdk.a 00:02:40.562 CC lib/rpc/rpc.o 00:02:40.562 SO libspdk_env_dpdk.so.15.1 00:02:40.562 SYMLINK libspdk_env_dpdk.so 00:02:40.562 LIB 
libspdk_rpc.a 00:02:40.822 SO libspdk_rpc.so.6.0 00:02:40.822 SYMLINK libspdk_rpc.so 00:02:41.081 CC lib/trace/trace.o 00:02:41.081 CC lib/trace/trace_flags.o 00:02:41.081 CC lib/trace/trace_rpc.o 00:02:41.081 CC lib/notify/notify.o 00:02:41.081 CC lib/notify/notify_rpc.o 00:02:41.081 CC lib/keyring/keyring.o 00:02:41.081 CC lib/keyring/keyring_rpc.o 00:02:41.339 LIB libspdk_notify.a 00:02:41.339 SO libspdk_notify.so.6.0 00:02:41.339 LIB libspdk_trace.a 00:02:41.339 LIB libspdk_keyring.a 00:02:41.339 SYMLINK libspdk_notify.so 00:02:41.339 SO libspdk_trace.so.11.0 00:02:41.339 SO libspdk_keyring.so.2.0 00:02:41.597 SYMLINK libspdk_trace.so 00:02:41.597 SYMLINK libspdk_keyring.so 00:02:41.856 CC lib/sock/sock.o 00:02:41.856 CC lib/sock/sock_rpc.o 00:02:41.856 CC lib/thread/thread.o 00:02:41.856 CC lib/thread/iobuf.o 00:02:42.424 LIB libspdk_sock.a 00:02:42.424 SO libspdk_sock.so.10.0 00:02:42.424 SYMLINK libspdk_sock.so 00:02:42.683 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.683 CC lib/nvme/nvme_ctrlr.o 00:02:42.683 CC lib/nvme/nvme_fabric.o 00:02:42.683 CC lib/nvme/nvme_ns_cmd.o 00:02:42.683 CC lib/nvme/nvme_ns.o 00:02:42.683 CC lib/nvme/nvme_pcie_common.o 00:02:42.683 CC lib/nvme/nvme_pcie.o 00:02:42.683 CC lib/nvme/nvme.o 00:02:42.683 CC lib/nvme/nvme_qpair.o 00:02:43.619 CC lib/nvme/nvme_quirks.o 00:02:43.619 CC lib/nvme/nvme_transport.o 00:02:43.619 LIB libspdk_thread.a 00:02:43.619 SO libspdk_thread.so.11.0 00:02:43.619 CC lib/nvme/nvme_discovery.o 00:02:43.619 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.619 SYMLINK libspdk_thread.so 00:02:43.619 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.619 CC lib/nvme/nvme_tcp.o 00:02:43.619 CC lib/nvme/nvme_opal.o 00:02:43.619 CC lib/nvme/nvme_io_msg.o 00:02:43.878 CC lib/nvme/nvme_poll_group.o 00:02:43.878 CC lib/nvme/nvme_zns.o 00:02:43.878 CC lib/nvme/nvme_stubs.o 00:02:43.878 CC lib/nvme/nvme_auth.o 00:02:44.137 CC lib/nvme/nvme_cuse.o 00:02:44.137 CC lib/nvme/nvme_rdma.o 00:02:44.137 CC lib/accel/accel.o 00:02:44.398 CC 
lib/blob/blobstore.o 00:02:44.657 CC lib/init/json_config.o 00:02:44.657 CC lib/fsdev/fsdev.o 00:02:44.657 CC lib/virtio/virtio.o 00:02:44.657 CC lib/init/subsystem.o 00:02:44.916 CC lib/virtio/virtio_vhost_user.o 00:02:44.916 CC lib/virtio/virtio_vfio_user.o 00:02:44.916 CC lib/virtio/virtio_pci.o 00:02:44.916 CC lib/init/subsystem_rpc.o 00:02:44.916 CC lib/init/rpc.o 00:02:45.175 CC lib/fsdev/fsdev_io.o 00:02:45.175 CC lib/fsdev/fsdev_rpc.o 00:02:45.175 CC lib/blob/request.o 00:02:45.175 CC lib/blob/zeroes.o 00:02:45.175 CC lib/accel/accel_rpc.o 00:02:45.175 LIB libspdk_init.a 00:02:45.175 CC lib/accel/accel_sw.o 00:02:45.175 LIB libspdk_virtio.a 00:02:45.175 SO libspdk_init.so.6.0 00:02:45.175 SO libspdk_virtio.so.7.0 00:02:45.175 SYMLINK libspdk_init.so 00:02:45.176 SYMLINK libspdk_virtio.so 00:02:45.176 CC lib/blob/blob_bs_dev.o 00:02:45.436 LIB libspdk_fsdev.a 00:02:45.436 SO libspdk_fsdev.so.2.0 00:02:45.436 CC lib/event/app.o 00:02:45.436 CC lib/event/reactor.o 00:02:45.436 CC lib/event/scheduler_static.o 00:02:45.436 CC lib/event/app_rpc.o 00:02:45.436 CC lib/event/log_rpc.o 00:02:45.436 SYMLINK libspdk_fsdev.so 00:02:45.436 LIB libspdk_accel.a 00:02:45.696 SO libspdk_accel.so.16.0 00:02:45.696 LIB libspdk_nvme.a 00:02:45.696 SYMLINK libspdk_accel.so 00:02:45.696 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:45.696 SO libspdk_nvme.so.15.0 00:02:45.955 CC lib/bdev/bdev.o 00:02:45.955 CC lib/bdev/bdev_rpc.o 00:02:45.955 CC lib/bdev/bdev_zone.o 00:02:45.955 CC lib/bdev/part.o 00:02:45.955 CC lib/bdev/scsi_nvme.o 00:02:45.956 SYMLINK libspdk_nvme.so 00:02:45.956 LIB libspdk_event.a 00:02:46.215 SO libspdk_event.so.14.0 00:02:46.215 SYMLINK libspdk_event.so 00:02:46.474 LIB libspdk_fuse_dispatcher.a 00:02:46.474 SO libspdk_fuse_dispatcher.so.1.0 00:02:46.474 SYMLINK libspdk_fuse_dispatcher.so 00:02:47.855 LIB libspdk_blob.a 00:02:47.855 SO libspdk_blob.so.12.0 00:02:47.855 SYMLINK libspdk_blob.so 00:02:48.115 CC lib/blobfs/blobfs.o 00:02:48.115 CC 
lib/blobfs/tree.o 00:02:48.376 CC lib/lvol/lvol.o 00:02:48.636 LIB libspdk_bdev.a 00:02:48.636 SO libspdk_bdev.so.17.0 00:02:48.636 SYMLINK libspdk_bdev.so 00:02:48.896 CC lib/ftl/ftl_core.o 00:02:48.896 CC lib/ftl/ftl_init.o 00:02:48.896 CC lib/ftl/ftl_layout.o 00:02:48.896 CC lib/ftl/ftl_debug.o 00:02:48.896 CC lib/nbd/nbd.o 00:02:48.896 CC lib/ublk/ublk.o 00:02:48.896 CC lib/nvmf/ctrlr.o 00:02:48.896 CC lib/scsi/dev.o 00:02:49.155 LIB libspdk_blobfs.a 00:02:49.155 SO libspdk_blobfs.so.11.0 00:02:49.155 SYMLINK libspdk_blobfs.so 00:02:49.155 CC lib/scsi/lun.o 00:02:49.155 CC lib/scsi/port.o 00:02:49.155 LIB libspdk_lvol.a 00:02:49.155 CC lib/scsi/scsi.o 00:02:49.155 CC lib/nbd/nbd_rpc.o 00:02:49.155 SO libspdk_lvol.so.11.0 00:02:49.415 SYMLINK libspdk_lvol.so 00:02:49.415 CC lib/nvmf/ctrlr_discovery.o 00:02:49.415 CC lib/nvmf/ctrlr_bdev.o 00:02:49.415 CC lib/nvmf/subsystem.o 00:02:49.415 CC lib/scsi/scsi_bdev.o 00:02:49.415 CC lib/ftl/ftl_io.o 00:02:49.415 CC lib/ftl/ftl_sb.o 00:02:49.415 LIB libspdk_nbd.a 00:02:49.415 SO libspdk_nbd.so.7.0 00:02:49.415 CC lib/nvmf/nvmf.o 00:02:49.415 SYMLINK libspdk_nbd.so 00:02:49.415 CC lib/nvmf/nvmf_rpc.o 00:02:49.675 CC lib/ublk/ublk_rpc.o 00:02:49.675 CC lib/ftl/ftl_l2p.o 00:02:49.675 CC lib/ftl/ftl_l2p_flat.o 00:02:49.675 LIB libspdk_ublk.a 00:02:49.675 SO libspdk_ublk.so.3.0 00:02:49.675 CC lib/ftl/ftl_nv_cache.o 00:02:49.935 CC lib/scsi/scsi_pr.o 00:02:49.935 SYMLINK libspdk_ublk.so 00:02:49.935 CC lib/nvmf/transport.o 00:02:49.935 CC lib/ftl/ftl_band.o 00:02:49.935 CC lib/ftl/ftl_band_ops.o 00:02:49.935 CC lib/nvmf/tcp.o 00:02:50.195 CC lib/scsi/scsi_rpc.o 00:02:50.195 CC lib/nvmf/stubs.o 00:02:50.195 CC lib/ftl/ftl_writer.o 00:02:50.195 CC lib/scsi/task.o 00:02:50.195 CC lib/ftl/ftl_rq.o 00:02:50.455 CC lib/ftl/ftl_reloc.o 00:02:50.455 LIB libspdk_scsi.a 00:02:50.455 CC lib/nvmf/mdns_server.o 00:02:50.455 CC lib/nvmf/rdma.o 00:02:50.455 CC lib/ftl/ftl_l2p_cache.o 00:02:50.455 SO libspdk_scsi.so.9.0 00:02:50.717 CC 
lib/nvmf/auth.o 00:02:50.717 SYMLINK libspdk_scsi.so 00:02:50.717 CC lib/ftl/ftl_p2l.o 00:02:50.717 CC lib/ftl/ftl_p2l_log.o 00:02:50.717 CC lib/iscsi/conn.o 00:02:50.717 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.989 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:50.989 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.989 CC lib/iscsi/init_grp.o 00:02:50.989 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:50.989 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.288 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.288 CC lib/iscsi/iscsi.o 00:02:51.288 CC lib/iscsi/param.o 00:02:51.288 CC lib/iscsi/portal_grp.o 00:02:51.288 CC lib/vhost/vhost.o 00:02:51.288 CC lib/vhost/vhost_rpc.o 00:02:51.288 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.288 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.548 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.548 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.548 CC lib/vhost/vhost_scsi.o 00:02:51.548 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.548 CC lib/vhost/vhost_blk.o 00:02:51.809 CC lib/vhost/rte_vhost_user.o 00:02:51.809 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.809 CC lib/iscsi/tgt_node.o 00:02:51.809 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.069 CC lib/ftl/utils/ftl_conf.o 00:02:52.069 CC lib/ftl/utils/ftl_md.o 00:02:52.069 CC lib/ftl/utils/ftl_mempool.o 00:02:52.069 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.329 CC lib/ftl/utils/ftl_property.o 00:02:52.329 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.329 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.329 CC lib/iscsi/iscsi_subsystem.o 00:02:52.329 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.329 CC lib/iscsi/iscsi_rpc.o 00:02:52.589 CC lib/iscsi/task.o 00:02:52.589 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.589 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.589 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.589 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.589 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.589 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.589 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.589 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:02:52.848 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:52.848 LIB libspdk_vhost.a 00:02:52.848 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:52.848 SO libspdk_vhost.so.8.0 00:02:52.848 CC lib/ftl/base/ftl_base_dev.o 00:02:52.848 CC lib/ftl/base/ftl_base_bdev.o 00:02:52.848 LIB libspdk_nvmf.a 00:02:52.848 CC lib/ftl/ftl_trace.o 00:02:52.848 LIB libspdk_iscsi.a 00:02:52.848 SYMLINK libspdk_vhost.so 00:02:53.108 SO libspdk_iscsi.so.8.0 00:02:53.108 SO libspdk_nvmf.so.20.0 00:02:53.108 SYMLINK libspdk_iscsi.so 00:02:53.108 LIB libspdk_ftl.a 00:02:53.108 SYMLINK libspdk_nvmf.so 00:02:53.367 SO libspdk_ftl.so.9.0 00:02:53.625 SYMLINK libspdk_ftl.so 00:02:53.884 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.144 CC module/keyring/linux/keyring.o 00:02:54.144 CC module/sock/posix/posix.o 00:02:54.144 CC module/accel/error/accel_error.o 00:02:54.144 CC module/keyring/file/keyring.o 00:02:54.144 CC module/accel/dsa/accel_dsa.o 00:02:54.144 CC module/fsdev/aio/fsdev_aio.o 00:02:54.144 CC module/blob/bdev/blob_bdev.o 00:02:54.144 CC module/accel/ioat/accel_ioat.o 00:02:54.144 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.144 LIB libspdk_env_dpdk_rpc.a 00:02:54.144 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.144 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.144 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.144 CC module/keyring/file/keyring_rpc.o 00:02:54.144 CC module/keyring/linux/keyring_rpc.o 00:02:54.144 CC module/accel/error/accel_error_rpc.o 00:02:54.144 LIB libspdk_scheduler_dynamic.a 00:02:54.144 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.403 LIB libspdk_accel_ioat.a 00:02:54.403 LIB libspdk_keyring_linux.a 00:02:54.403 LIB libspdk_keyring_file.a 00:02:54.403 SO libspdk_accel_ioat.so.6.0 00:02:54.403 LIB libspdk_blob_bdev.a 00:02:54.403 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.403 SO libspdk_keyring_linux.so.1.0 00:02:54.403 SO libspdk_keyring_file.so.2.0 00:02:54.403 SO libspdk_blob_bdev.so.12.0 00:02:54.403 CC module/accel/iaa/accel_iaa.o 00:02:54.403 SYMLINK 
libspdk_accel_ioat.so 00:02:54.403 LIB libspdk_accel_error.a 00:02:54.403 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.403 SYMLINK libspdk_keyring_file.so 00:02:54.403 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:54.403 CC module/fsdev/aio/linux_aio_mgr.o 00:02:54.403 SYMLINK libspdk_keyring_linux.so 00:02:54.403 SO libspdk_accel_error.so.2.0 00:02:54.403 SYMLINK libspdk_blob_bdev.so 00:02:54.403 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.403 SYMLINK libspdk_accel_error.so 00:02:54.403 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.403 LIB libspdk_accel_dsa.a 00:02:54.663 SO libspdk_accel_dsa.so.5.0 00:02:54.663 LIB libspdk_accel_iaa.a 00:02:54.663 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.663 SO libspdk_accel_iaa.so.3.0 00:02:54.663 SYMLINK libspdk_accel_dsa.so 00:02:54.663 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.663 SYMLINK libspdk_accel_iaa.so 00:02:54.663 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.663 CC module/bdev/delay/vbdev_delay.o 00:02:54.663 CC module/bdev/error/vbdev_error.o 00:02:54.663 CC module/bdev/gpt/gpt.o 00:02:54.663 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.663 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.663 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.663 LIB libspdk_scheduler_gscheduler.a 00:02:54.663 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.663 LIB libspdk_fsdev_aio.a 00:02:54.663 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.923 SO libspdk_fsdev_aio.so.1.0 00:02:54.923 LIB libspdk_sock_posix.a 00:02:54.923 CC module/bdev/malloc/bdev_malloc.o 00:02:54.923 SO libspdk_sock_posix.so.6.0 00:02:54.923 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.923 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.923 SYMLINK libspdk_fsdev_aio.so 00:02:54.923 CC module/bdev/gpt/vbdev_gpt.o 00:02:54.923 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.923 SYMLINK libspdk_sock_posix.so 00:02:54.923 CC module/bdev/error/vbdev_error_rpc.o 00:02:54.923 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.923 CC 
module/bdev/null/bdev_null.o 00:02:55.182 LIB libspdk_bdev_error.a 00:02:55.182 LIB libspdk_blobfs_bdev.a 00:02:55.182 LIB libspdk_bdev_delay.a 00:02:55.182 SO libspdk_bdev_error.so.6.0 00:02:55.182 SO libspdk_blobfs_bdev.so.6.0 00:02:55.182 CC module/bdev/nvme/bdev_nvme.o 00:02:55.182 SO libspdk_bdev_delay.so.6.0 00:02:55.182 LIB libspdk_bdev_gpt.a 00:02:55.182 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.182 SYMLINK libspdk_bdev_error.so 00:02:55.182 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.182 SYMLINK libspdk_blobfs_bdev.so 00:02:55.182 SO libspdk_bdev_gpt.so.6.0 00:02:55.182 SYMLINK libspdk_bdev_delay.so 00:02:55.182 SYMLINK libspdk_bdev_gpt.so 00:02:55.182 LIB libspdk_bdev_lvol.a 00:02:55.182 SO libspdk_bdev_lvol.so.6.0 00:02:55.182 LIB libspdk_bdev_malloc.a 00:02:55.182 CC module/bdev/raid/bdev_raid.o 00:02:55.182 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.182 CC module/bdev/null/bdev_null_rpc.o 00:02:55.442 CC module/bdev/split/vbdev_split.o 00:02:55.442 SO libspdk_bdev_malloc.so.6.0 00:02:55.442 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.442 SYMLINK libspdk_bdev_lvol.so 00:02:55.442 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.442 CC module/bdev/aio/bdev_aio.o 00:02:55.442 SYMLINK libspdk_bdev_malloc.so 00:02:55.442 CC module/bdev/split/vbdev_split_rpc.o 00:02:55.442 LIB libspdk_bdev_passthru.a 00:02:55.442 SO libspdk_bdev_passthru.so.6.0 00:02:55.442 LIB libspdk_bdev_null.a 00:02:55.442 SO libspdk_bdev_null.so.6.0 00:02:55.442 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.442 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.442 SYMLINK libspdk_bdev_passthru.so 00:02:55.442 LIB libspdk_bdev_split.a 00:02:55.701 SYMLINK libspdk_bdev_null.so 00:02:55.701 SO libspdk_bdev_split.so.6.0 00:02:55.701 CC module/bdev/raid/bdev_raid_sb.o 00:02:55.701 SYMLINK libspdk_bdev_split.so 00:02:55.701 CC module/bdev/ftl/bdev_ftl.o 00:02:55.702 CC module/bdev/raid/raid0.o 00:02:55.702 LIB libspdk_bdev_zone_block.a 00:02:55.702 CC 
module/bdev/iscsi/bdev_iscsi.o 00:02:55.702 SO libspdk_bdev_zone_block.so.6.0 00:02:55.702 LIB libspdk_bdev_aio.a 00:02:55.702 SO libspdk_bdev_aio.so.6.0 00:02:55.702 SYMLINK libspdk_bdev_zone_block.so 00:02:55.702 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.702 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.702 SYMLINK libspdk_bdev_aio.so 00:02:55.702 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.961 CC module/bdev/raid/raid1.o 00:02:55.961 CC module/bdev/raid/concat.o 00:02:55.961 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.961 CC module/bdev/nvme/nvme_rpc.o 00:02:55.961 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.221 CC module/bdev/raid/raid5f.o 00:02:56.221 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.221 CC module/bdev/nvme/vbdev_opal.o 00:02:56.221 LIB libspdk_bdev_ftl.a 00:02:56.221 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.221 SO libspdk_bdev_ftl.so.6.0 00:02:56.221 LIB libspdk_bdev_iscsi.a 00:02:56.221 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.221 SO libspdk_bdev_iscsi.so.6.0 00:02:56.221 LIB libspdk_bdev_virtio.a 00:02:56.221 SYMLINK libspdk_bdev_ftl.so 00:02:56.221 SYMLINK libspdk_bdev_iscsi.so 00:02:56.482 SO libspdk_bdev_virtio.so.6.0 00:02:56.482 SYMLINK libspdk_bdev_virtio.so 00:02:56.743 LIB libspdk_bdev_raid.a 00:02:56.743 SO libspdk_bdev_raid.so.6.0 00:02:57.002 SYMLINK libspdk_bdev_raid.so 00:02:57.943 LIB libspdk_bdev_nvme.a 00:02:57.943 SO libspdk_bdev_nvme.so.7.1 00:02:57.943 SYMLINK libspdk_bdev_nvme.so 00:02:58.515 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.515 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.515 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.515 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.515 CC module/event/subsystems/fsdev/fsdev.o 00:02:58.515 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.515 CC module/event/subsystems/vmd/vmd.o 00:02:58.515 CC module/event/subsystems/sock/sock.o 00:02:58.515 CC module/event/subsystems/keyring/keyring.o 00:02:58.515 LIB 
libspdk_event_scheduler.a 00:02:58.515 LIB libspdk_event_vmd.a 00:02:58.515 LIB libspdk_event_sock.a 00:02:58.776 LIB libspdk_event_vhost_blk.a 00:02:58.776 LIB libspdk_event_iobuf.a 00:02:58.776 LIB libspdk_event_keyring.a 00:02:58.776 SO libspdk_event_sock.so.5.0 00:02:58.776 LIB libspdk_event_fsdev.a 00:02:58.776 SO libspdk_event_scheduler.so.4.0 00:02:58.776 SO libspdk_event_vmd.so.6.0 00:02:58.776 SO libspdk_event_vhost_blk.so.3.0 00:02:58.776 SO libspdk_event_keyring.so.1.0 00:02:58.776 SO libspdk_event_iobuf.so.3.0 00:02:58.776 SO libspdk_event_fsdev.so.1.0 00:02:58.776 SYMLINK libspdk_event_scheduler.so 00:02:58.776 SYMLINK libspdk_event_sock.so 00:02:58.776 SYMLINK libspdk_event_vmd.so 00:02:58.776 SYMLINK libspdk_event_vhost_blk.so 00:02:58.776 SYMLINK libspdk_event_keyring.so 00:02:58.776 SYMLINK libspdk_event_fsdev.so 00:02:58.776 SYMLINK libspdk_event_iobuf.so 00:02:59.036 CC module/event/subsystems/accel/accel.o 00:02:59.295 LIB libspdk_event_accel.a 00:02:59.295 SO libspdk_event_accel.so.6.0 00:02:59.296 SYMLINK libspdk_event_accel.so 00:02:59.864 CC module/event/subsystems/bdev/bdev.o 00:02:59.864 LIB libspdk_event_bdev.a 00:03:00.123 SO libspdk_event_bdev.so.6.0 00:03:00.123 SYMLINK libspdk_event_bdev.so 00:03:00.382 CC module/event/subsystems/nbd/nbd.o 00:03:00.382 CC module/event/subsystems/scsi/scsi.o 00:03:00.382 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.382 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.382 CC module/event/subsystems/ublk/ublk.o 00:03:00.642 LIB libspdk_event_nbd.a 00:03:00.642 SO libspdk_event_nbd.so.6.0 00:03:00.642 LIB libspdk_event_scsi.a 00:03:00.642 LIB libspdk_event_ublk.a 00:03:00.642 SO libspdk_event_ublk.so.3.0 00:03:00.642 SO libspdk_event_scsi.so.6.0 00:03:00.642 SYMLINK libspdk_event_nbd.so 00:03:00.642 LIB libspdk_event_nvmf.a 00:03:00.642 SYMLINK libspdk_event_ublk.so 00:03:00.642 SYMLINK libspdk_event_scsi.so 00:03:00.642 SO libspdk_event_nvmf.so.6.0 00:03:00.642 SYMLINK libspdk_event_nvmf.so 
00:03:00.903 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.903 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.162 LIB libspdk_event_vhost_scsi.a 00:03:01.162 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.162 LIB libspdk_event_iscsi.a 00:03:01.162 SO libspdk_event_iscsi.so.6.0 00:03:01.162 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.421 SYMLINK libspdk_event_iscsi.so 00:03:01.421 SO libspdk.so.6.0 00:03:01.421 SYMLINK libspdk.so 00:03:01.992 CXX app/trace/trace.o 00:03:01.992 CC app/spdk_lspci/spdk_lspci.o 00:03:01.992 CC app/trace_record/trace_record.o 00:03:01.992 CC app/spdk_nvme_identify/identify.o 00:03:01.992 CC app/spdk_nvme_perf/perf.o 00:03:01.992 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.992 CC app/nvmf_tgt/nvmf_main.o 00:03:01.992 CC app/spdk_tgt/spdk_tgt.o 00:03:01.992 CC examples/util/zipf/zipf.o 00:03:01.992 CC test/thread/poller_perf/poller_perf.o 00:03:01.992 LINK spdk_lspci 00:03:01.992 LINK nvmf_tgt 00:03:01.992 LINK poller_perf 00:03:01.992 LINK zipf 00:03:01.992 LINK iscsi_tgt 00:03:01.992 LINK spdk_trace_record 00:03:01.992 LINK spdk_tgt 00:03:02.252 LINK spdk_trace 00:03:02.252 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.252 CC app/spdk_top/spdk_top.o 00:03:02.252 CC examples/ioat/perf/perf.o 00:03:02.511 CC app/spdk_dd/spdk_dd.o 00:03:02.511 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.511 CC test/dma/test_dma/test_dma.o 00:03:02.511 CC examples/ioat/verify/verify.o 00:03:02.511 LINK spdk_nvme_discover 00:03:02.511 LINK lsvmd 00:03:02.511 CC examples/idxd/perf/perf.o 00:03:02.511 LINK ioat_perf 00:03:02.771 LINK verify 00:03:02.771 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.771 LINK spdk_nvme_perf 00:03:02.771 LINK spdk_dd 00:03:02.771 LINK spdk_nvme_identify 00:03:02.771 CC examples/vmd/led/led.o 00:03:03.029 LINK interrupt_tgt 00:03:03.029 LINK idxd_perf 00:03:03.029 LINK test_dma 00:03:03.029 CC examples/thread/thread/thread_ex.o 00:03:03.029 LINK led 00:03:03.029 CC app/fio/nvme/fio_plugin.o 00:03:03.029 TEST_HEADER 
include/spdk/accel.h 00:03:03.029 TEST_HEADER include/spdk/accel_module.h 00:03:03.029 TEST_HEADER include/spdk/assert.h 00:03:03.029 TEST_HEADER include/spdk/barrier.h 00:03:03.029 TEST_HEADER include/spdk/base64.h 00:03:03.029 TEST_HEADER include/spdk/bdev.h 00:03:03.029 TEST_HEADER include/spdk/bdev_module.h 00:03:03.029 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.029 TEST_HEADER include/spdk/bit_array.h 00:03:03.029 TEST_HEADER include/spdk/bit_pool.h 00:03:03.029 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.029 CC app/vhost/vhost.o 00:03:03.029 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.029 TEST_HEADER include/spdk/blobfs.h 00:03:03.029 TEST_HEADER include/spdk/blob.h 00:03:03.029 TEST_HEADER include/spdk/conf.h 00:03:03.029 TEST_HEADER include/spdk/config.h 00:03:03.029 TEST_HEADER include/spdk/cpuset.h 00:03:03.029 TEST_HEADER include/spdk/crc16.h 00:03:03.029 TEST_HEADER include/spdk/crc32.h 00:03:03.029 TEST_HEADER include/spdk/crc64.h 00:03:03.029 TEST_HEADER include/spdk/dif.h 00:03:03.029 TEST_HEADER include/spdk/dma.h 00:03:03.029 TEST_HEADER include/spdk/endian.h 00:03:03.029 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.029 TEST_HEADER include/spdk/env.h 00:03:03.029 TEST_HEADER include/spdk/event.h 00:03:03.029 TEST_HEADER include/spdk/fd_group.h 00:03:03.029 TEST_HEADER include/spdk/fd.h 00:03:03.029 TEST_HEADER include/spdk/file.h 00:03:03.029 TEST_HEADER include/spdk/fsdev.h 00:03:03.029 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.029 TEST_HEADER include/spdk/ftl.h 00:03:03.029 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:03.029 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.029 TEST_HEADER include/spdk/hexlify.h 00:03:03.029 TEST_HEADER include/spdk/histogram_data.h 00:03:03.029 TEST_HEADER include/spdk/idxd.h 00:03:03.029 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.029 TEST_HEADER include/spdk/init.h 00:03:03.029 TEST_HEADER include/spdk/ioat.h 00:03:03.029 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.029 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:03.029 TEST_HEADER include/spdk/json.h 00:03:03.029 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.029 TEST_HEADER include/spdk/keyring.h 00:03:03.029 TEST_HEADER include/spdk/keyring_module.h 00:03:03.029 TEST_HEADER include/spdk/likely.h 00:03:03.029 TEST_HEADER include/spdk/log.h 00:03:03.029 TEST_HEADER include/spdk/lvol.h 00:03:03.029 TEST_HEADER include/spdk/md5.h 00:03:03.029 TEST_HEADER include/spdk/memory.h 00:03:03.029 TEST_HEADER include/spdk/mmio.h 00:03:03.029 TEST_HEADER include/spdk/nbd.h 00:03:03.029 CC test/app/bdev_svc/bdev_svc.o 00:03:03.029 TEST_HEADER include/spdk/net.h 00:03:03.029 TEST_HEADER include/spdk/notify.h 00:03:03.297 TEST_HEADER include/spdk/nvme.h 00:03:03.297 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.297 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.297 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.297 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.297 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.297 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.297 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.297 TEST_HEADER include/spdk/nvmf.h 00:03:03.297 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.297 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.297 TEST_HEADER include/spdk/opal.h 00:03:03.297 TEST_HEADER include/spdk/opal_spec.h 00:03:03.297 TEST_HEADER include/spdk/pci_ids.h 00:03:03.297 TEST_HEADER include/spdk/pipe.h 00:03:03.297 TEST_HEADER include/spdk/queue.h 00:03:03.297 TEST_HEADER include/spdk/reduce.h 00:03:03.297 TEST_HEADER include/spdk/rpc.h 00:03:03.297 TEST_HEADER include/spdk/scheduler.h 00:03:03.297 TEST_HEADER include/spdk/scsi.h 00:03:03.297 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.297 TEST_HEADER include/spdk/sock.h 00:03:03.297 TEST_HEADER include/spdk/stdinc.h 00:03:03.297 LINK thread 00:03:03.297 CC test/app/histogram_perf/histogram_perf.o 00:03:03.297 TEST_HEADER include/spdk/string.h 00:03:03.297 TEST_HEADER include/spdk/thread.h 00:03:03.297 TEST_HEADER 
include/spdk/trace.h 00:03:03.297 TEST_HEADER include/spdk/trace_parser.h 00:03:03.297 TEST_HEADER include/spdk/tree.h 00:03:03.297 TEST_HEADER include/spdk/ublk.h 00:03:03.297 TEST_HEADER include/spdk/util.h 00:03:03.297 CC test/app/jsoncat/jsoncat.o 00:03:03.297 TEST_HEADER include/spdk/uuid.h 00:03:03.297 TEST_HEADER include/spdk/version.h 00:03:03.297 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.297 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.297 TEST_HEADER include/spdk/vhost.h 00:03:03.297 TEST_HEADER include/spdk/vmd.h 00:03:03.297 TEST_HEADER include/spdk/xor.h 00:03:03.297 TEST_HEADER include/spdk/zipf.h 00:03:03.297 CC app/fio/bdev/fio_plugin.o 00:03:03.297 CXX test/cpp_headers/accel.o 00:03:03.297 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:03.297 LINK vhost 00:03:03.297 LINK bdev_svc 00:03:03.297 LINK histogram_perf 00:03:03.297 LINK jsoncat 00:03:03.297 LINK spdk_top 00:03:03.297 CXX test/cpp_headers/accel_module.o 00:03:03.570 CXX test/cpp_headers/assert.o 00:03:03.570 CXX test/cpp_headers/barrier.o 00:03:03.570 CC examples/sock/hello_world/hello_sock.o 00:03:03.570 CXX test/cpp_headers/base64.o 00:03:03.570 LINK spdk_nvme 00:03:03.828 LINK nvme_fuzz 00:03:03.828 CXX test/cpp_headers/bdev.o 00:03:03.828 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:03.828 CC examples/accel/perf/accel_perf.o 00:03:03.828 CC examples/blob/hello_world/hello_blob.o 00:03:03.828 CC test/app/stub/stub.o 00:03:03.828 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.828 LINK spdk_bdev 00:03:03.828 CC examples/blob/cli/blobcli.o 00:03:03.828 LINK hello_sock 00:03:03.828 CXX test/cpp_headers/bdev_module.o 00:03:04.088 LINK stub 00:03:04.088 CXX test/cpp_headers/bdev_zone.o 00:03:04.088 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.088 LINK hello_fsdev 00:03:04.088 LINK hello_blob 00:03:04.089 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.089 CXX test/cpp_headers/bit_array.o 00:03:04.348 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.348 CXX 
test/cpp_headers/bit_pool.o 00:03:04.348 CC test/env/vtophys/vtophys.o 00:03:04.348 CC examples/nvme/hello_world/hello_world.o 00:03:04.348 LINK accel_perf 00:03:04.348 CC examples/nvme/reconnect/reconnect.o 00:03:04.348 LINK blobcli 00:03:04.348 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.348 CXX test/cpp_headers/blob_bdev.o 00:03:04.348 LINK vtophys 00:03:04.606 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.606 LINK hello_world 00:03:04.606 CXX test/cpp_headers/blobfs.o 00:03:04.606 LINK vhost_fuzz 00:03:04.606 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.606 CC examples/nvme/arbitration/arbitration.o 00:03:04.607 LINK reconnect 00:03:04.607 CXX test/cpp_headers/blob.o 00:03:04.888 CC examples/nvme/hotplug/hotplug.o 00:03:04.888 LINK mem_callbacks 00:03:04.888 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:04.888 LINK env_dpdk_post_init 00:03:04.888 CC examples/nvme/abort/abort.o 00:03:04.888 CXX test/cpp_headers/conf.o 00:03:04.888 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:04.888 LINK nvme_manage 00:03:04.888 LINK hotplug 00:03:05.147 LINK cmb_copy 00:03:05.147 CXX test/cpp_headers/config.o 00:03:05.147 LINK arbitration 00:03:05.147 CXX test/cpp_headers/cpuset.o 00:03:05.147 CC test/env/memory/memory_ut.o 00:03:05.147 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.147 LINK pmr_persistence 00:03:05.147 CXX test/cpp_headers/crc16.o 00:03:05.147 CC test/env/pci/pci_ut.o 00:03:05.147 LINK abort 00:03:05.405 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.405 CC test/nvme/aer/aer.o 00:03:05.405 CC test/event/event_perf/event_perf.o 00:03:05.405 LINK hello_bdev 00:03:05.405 CXX test/cpp_headers/crc32.o 00:03:05.405 CC test/rpc_client/rpc_client_test.o 00:03:05.405 CXX test/cpp_headers/crc64.o 00:03:05.663 LINK event_perf 00:03:05.663 CXX test/cpp_headers/dif.o 00:03:05.663 CXX test/cpp_headers/dma.o 00:03:05.663 LINK rpc_client_test 00:03:05.663 CC test/event/reactor/reactor.o 00:03:05.663 LINK aer 00:03:05.663 LINK pci_ut 
00:03:05.663 CXX test/cpp_headers/endian.o 00:03:05.663 LINK reactor 00:03:05.921 LINK iscsi_fuzz 00:03:05.921 CC test/event/reactor_perf/reactor_perf.o 00:03:05.921 CC test/event/app_repeat/app_repeat.o 00:03:05.921 CXX test/cpp_headers/env_dpdk.o 00:03:05.921 CXX test/cpp_headers/env.o 00:03:05.921 CC test/nvme/reset/reset.o 00:03:05.921 LINK reactor_perf 00:03:05.921 CXX test/cpp_headers/event.o 00:03:05.921 CC test/accel/dif/dif.o 00:03:05.921 LINK app_repeat 00:03:06.180 CXX test/cpp_headers/fd_group.o 00:03:06.180 CXX test/cpp_headers/fd.o 00:03:06.180 CXX test/cpp_headers/file.o 00:03:06.180 CC test/event/scheduler/scheduler.o 00:03:06.180 LINK reset 00:03:06.180 CC test/blobfs/mkfs/mkfs.o 00:03:06.180 LINK bdevperf 00:03:06.438 CXX test/cpp_headers/fsdev.o 00:03:06.438 CC test/lvol/esnap/esnap.o 00:03:06.438 LINK memory_ut 00:03:06.438 CC test/nvme/sgl/sgl.o 00:03:06.438 CXX test/cpp_headers/fsdev_module.o 00:03:06.438 CC test/nvme/e2edp/nvme_dp.o 00:03:06.438 LINK scheduler 00:03:06.438 LINK mkfs 00:03:06.439 CXX test/cpp_headers/ftl.o 00:03:06.696 CXX test/cpp_headers/fuse_dispatcher.o 00:03:06.696 CC examples/nvmf/nvmf/nvmf.o 00:03:06.696 LINK sgl 00:03:06.696 CC test/nvme/overhead/overhead.o 00:03:06.696 LINK nvme_dp 00:03:06.696 CC test/nvme/reserve/reserve.o 00:03:06.696 CC test/nvme/err_injection/err_injection.o 00:03:06.696 CC test/nvme/startup/startup.o 00:03:06.696 CXX test/cpp_headers/gpt_spec.o 00:03:06.696 LINK dif 00:03:06.955 CXX test/cpp_headers/hexlify.o 00:03:06.955 CC test/nvme/simple_copy/simple_copy.o 00:03:06.955 LINK startup 00:03:06.955 LINK err_injection 00:03:06.955 LINK reserve 00:03:06.955 CC test/nvme/connect_stress/connect_stress.o 00:03:06.955 CXX test/cpp_headers/histogram_data.o 00:03:06.955 LINK nvmf 00:03:06.955 LINK overhead 00:03:07.215 CXX test/cpp_headers/idxd.o 00:03:07.215 LINK connect_stress 00:03:07.215 CXX test/cpp_headers/idxd_spec.o 00:03:07.215 LINK simple_copy 00:03:07.215 CXX test/cpp_headers/init.o 
00:03:07.215 CC test/nvme/boot_partition/boot_partition.o 00:03:07.215 CC test/nvme/compliance/nvme_compliance.o 00:03:07.215 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.215 CC test/bdev/bdevio/bdevio.o 00:03:07.476 CXX test/cpp_headers/ioat.o 00:03:07.476 LINK boot_partition 00:03:07.476 CXX test/cpp_headers/ioat_spec.o 00:03:07.476 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.476 CC test/nvme/cuse/cuse.o 00:03:07.476 CC test/nvme/fdp/fdp.o 00:03:07.476 LINK fused_ordering 00:03:07.736 CXX test/cpp_headers/iscsi_spec.o 00:03:07.736 CXX test/cpp_headers/json.o 00:03:07.736 CXX test/cpp_headers/jsonrpc.o 00:03:07.736 LINK doorbell_aers 00:03:07.736 LINK nvme_compliance 00:03:07.736 CXX test/cpp_headers/keyring.o 00:03:07.736 LINK bdevio 00:03:07.736 CXX test/cpp_headers/keyring_module.o 00:03:07.736 CXX test/cpp_headers/likely.o 00:03:07.995 CXX test/cpp_headers/log.o 00:03:07.995 CXX test/cpp_headers/lvol.o 00:03:07.995 CXX test/cpp_headers/md5.o 00:03:07.995 CXX test/cpp_headers/memory.o 00:03:07.995 LINK fdp 00:03:07.995 CXX test/cpp_headers/mmio.o 00:03:07.995 CXX test/cpp_headers/nbd.o 00:03:07.995 CXX test/cpp_headers/net.o 00:03:07.995 CXX test/cpp_headers/notify.o 00:03:07.995 CXX test/cpp_headers/nvme.o 00:03:07.995 CXX test/cpp_headers/nvme_intel.o 00:03:07.995 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.995 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.995 CXX test/cpp_headers/nvme_spec.o 00:03:08.255 CXX test/cpp_headers/nvme_zns.o 00:03:08.255 CXX test/cpp_headers/nvmf_cmd.o 00:03:08.255 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.255 CXX test/cpp_headers/nvmf.o 00:03:08.255 CXX test/cpp_headers/nvmf_spec.o 00:03:08.255 CXX test/cpp_headers/nvmf_transport.o 00:03:08.255 CXX test/cpp_headers/opal.o 00:03:08.255 CXX test/cpp_headers/opal_spec.o 00:03:08.255 CXX test/cpp_headers/pci_ids.o 00:03:08.255 CXX test/cpp_headers/pipe.o 00:03:08.514 CXX test/cpp_headers/queue.o 00:03:08.514 CXX test/cpp_headers/reduce.o 00:03:08.514 CXX 
test/cpp_headers/rpc.o 00:03:08.514 CXX test/cpp_headers/scheduler.o 00:03:08.514 CXX test/cpp_headers/scsi.o 00:03:08.514 CXX test/cpp_headers/scsi_spec.o 00:03:08.514 CXX test/cpp_headers/sock.o 00:03:08.514 CXX test/cpp_headers/stdinc.o 00:03:08.514 CXX test/cpp_headers/string.o 00:03:08.514 CXX test/cpp_headers/thread.o 00:03:08.514 CXX test/cpp_headers/trace.o 00:03:08.514 CXX test/cpp_headers/trace_parser.o 00:03:08.514 CXX test/cpp_headers/tree.o 00:03:08.514 CXX test/cpp_headers/ublk.o 00:03:08.514 CXX test/cpp_headers/util.o 00:03:08.514 CXX test/cpp_headers/uuid.o 00:03:08.773 CXX test/cpp_headers/version.o 00:03:08.773 CXX test/cpp_headers/vfio_user_pci.o 00:03:08.773 CXX test/cpp_headers/vfio_user_spec.o 00:03:08.773 CXX test/cpp_headers/vhost.o 00:03:08.773 CXX test/cpp_headers/vmd.o 00:03:08.773 CXX test/cpp_headers/xor.o 00:03:08.774 CXX test/cpp_headers/zipf.o 00:03:09.033 LINK cuse 00:03:12.327 LINK esnap 00:03:12.896 00:03:12.896 real 1m21.123s 00:03:12.897 user 7m6.266s 00:03:12.897 sys 1m31.081s 00:03:12.897 19:58:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:12.897 19:58:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:12.897 ************************************ 00:03:12.897 END TEST make 00:03:12.897 ************************************ 00:03:12.897 19:58:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:12.897 19:58:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:12.897 19:58:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:12.897 19:58:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.897 19:58:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:12.897 19:58:44 -- pm/common@44 -- $ pid=5468 00:03:12.897 19:58:44 -- pm/common@50 -- $ kill -TERM 5468 00:03:12.897 19:58:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.897 19:58:44 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:12.897 19:58:44 -- pm/common@44 -- $ pid=5470 00:03:12.897 19:58:44 -- pm/common@50 -- $ kill -TERM 5470 00:03:12.897 19:58:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:12.897 19:58:44 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:12.897 19:58:44 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:12.897 19:58:44 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:12.897 19:58:44 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:12.897 19:58:44 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:12.897 19:58:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:12.897 19:58:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:12.897 19:58:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:12.897 19:58:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:12.897 19:58:44 -- scripts/common.sh@336 -- # read -ra ver1 00:03:12.897 19:58:44 -- scripts/common.sh@337 -- # IFS=.-: 00:03:12.897 19:58:44 -- scripts/common.sh@337 -- # read -ra ver2 00:03:12.897 19:58:44 -- scripts/common.sh@338 -- # local 'op=<' 00:03:12.897 19:58:44 -- scripts/common.sh@340 -- # ver1_l=2 00:03:12.897 19:58:44 -- scripts/common.sh@341 -- # ver2_l=1 00:03:12.897 19:58:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:12.897 19:58:44 -- scripts/common.sh@344 -- # case "$op" in 00:03:12.897 19:58:44 -- scripts/common.sh@345 -- # : 1 00:03:12.897 19:58:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:12.897 19:58:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:12.897 19:58:44 -- scripts/common.sh@365 -- # decimal 1 00:03:12.897 19:58:44 -- scripts/common.sh@353 -- # local d=1 00:03:12.897 19:58:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:12.897 19:58:44 -- scripts/common.sh@355 -- # echo 1 00:03:12.897 19:58:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:12.897 19:58:44 -- scripts/common.sh@366 -- # decimal 2 00:03:12.897 19:58:44 -- scripts/common.sh@353 -- # local d=2 00:03:12.897 19:58:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:12.897 19:58:44 -- scripts/common.sh@355 -- # echo 2 00:03:12.897 19:58:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:12.897 19:58:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:12.897 19:58:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:12.897 19:58:44 -- scripts/common.sh@368 -- # return 0 00:03:12.897 19:58:44 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:12.897 19:58:44 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.897 --rc genhtml_branch_coverage=1 00:03:12.897 --rc genhtml_function_coverage=1 00:03:12.897 --rc genhtml_legend=1 00:03:12.897 --rc geninfo_all_blocks=1 00:03:12.897 --rc geninfo_unexecuted_blocks=1 00:03:12.897 00:03:12.897 ' 00:03:12.897 19:58:44 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.897 --rc genhtml_branch_coverage=1 00:03:12.897 --rc genhtml_function_coverage=1 00:03:12.897 --rc genhtml_legend=1 00:03:12.897 --rc geninfo_all_blocks=1 00:03:12.897 --rc geninfo_unexecuted_blocks=1 00:03:12.897 00:03:12.897 ' 00:03:12.897 19:58:44 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.897 --rc genhtml_branch_coverage=1 00:03:12.897 --rc 
genhtml_function_coverage=1 00:03:12.897 --rc genhtml_legend=1 00:03:12.897 --rc geninfo_all_blocks=1 00:03:12.897 --rc geninfo_unexecuted_blocks=1 00:03:12.897 00:03:12.897 ' 00:03:12.897 19:58:44 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.897 --rc genhtml_branch_coverage=1 00:03:12.897 --rc genhtml_function_coverage=1 00:03:12.897 --rc genhtml_legend=1 00:03:12.897 --rc geninfo_all_blocks=1 00:03:12.897 --rc geninfo_unexecuted_blocks=1 00:03:12.897 00:03:12.897 ' 00:03:12.897 19:58:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:12.897 19:58:44 -- nvmf/common.sh@7 -- # uname -s 00:03:12.897 19:58:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:12.897 19:58:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:12.897 19:58:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:12.897 19:58:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:12.897 19:58:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:12.897 19:58:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:12.897 19:58:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:12.897 19:58:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:12.897 19:58:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:12.897 19:58:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:13.157 19:58:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:03:13.157 19:58:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:03:13.157 19:58:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:13.157 19:58:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:13.157 19:58:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:13.157 19:58:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:13.157 19:58:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:13.157 19:58:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:13.157 19:58:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:13.157 19:58:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.157 19:58:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.157 19:58:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.158 19:58:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.158 19:58:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.158 19:58:44 -- paths/export.sh@5 -- # export PATH 00:03:13.158 19:58:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.158 19:58:44 -- nvmf/common.sh@51 -- # : 0 00:03:13.158 19:58:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:13.158 19:58:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:13.158 19:58:44 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:13.158 19:58:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:13.158 19:58:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:13.158 19:58:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:13.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:13.158 19:58:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:13.158 19:58:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:13.158 19:58:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:13.158 19:58:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:13.158 19:58:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:13.158 19:58:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:13.158 19:58:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:13.158 19:58:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:13.158 19:58:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:13.158 19:58:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:13.158 19:58:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:13.158 19:58:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:13.158 19:58:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:13.158 19:58:44 -- spdk/autotest.sh@48 -- # udevadm_pid=54386 00:03:13.158 19:58:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:13.158 19:58:44 -- pm/common@17 -- # local monitor 00:03:13.158 19:58:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:13.158 19:58:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.158 19:58:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.158 19:58:44 -- pm/common@25 -- # sleep 1 00:03:13.158 19:58:44 -- pm/common@21 -- # date +%s 00:03:13.158 19:58:44 -- 
pm/common@21 -- # date +%s 00:03:13.158 19:58:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733687924 00:03:13.158 19:58:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733687924 00:03:13.158 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733687924_collect-vmstat.pm.log 00:03:13.158 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733687924_collect-cpu-load.pm.log 00:03:14.096 19:58:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:14.096 19:58:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:14.096 19:58:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.096 19:58:45 -- common/autotest_common.sh@10 -- # set +x 00:03:14.096 19:58:45 -- spdk/autotest.sh@59 -- # create_test_list 00:03:14.096 19:58:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:14.096 19:58:45 -- common/autotest_common.sh@10 -- # set +x 00:03:14.096 19:58:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:14.096 19:58:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:14.096 19:58:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:14.096 19:58:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:14.096 19:58:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:14.096 19:58:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:14.096 19:58:46 -- common/autotest_common.sh@1457 -- # uname 00:03:14.096 19:58:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:14.096 19:58:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:14.096 19:58:46 -- common/autotest_common.sh@1477 -- 
# uname 00:03:14.096 19:58:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:14.097 19:58:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:14.097 19:58:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:14.354 lcov: LCOV version 1.15 00:03:14.354 19:58:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:29.258 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:29.258 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:44.152 19:59:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:44.152 19:59:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.152 19:59:15 -- common/autotest_common.sh@10 -- # set +x 00:03:44.152 19:59:15 -- spdk/autotest.sh@78 -- # rm -f 00:03:44.152 19:59:15 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.152 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:44.152 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:44.152 19:59:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:44.152 19:59:16 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:44.152 19:59:16 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:44.152 19:59:16 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:44.152 
19:59:16 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:44.152 19:59:16 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:44.152 19:59:16 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:44.152 19:59:16 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:44.152 19:59:16 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:44.152 19:59:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:44.152 19:59:16 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:44.152 19:59:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:44.152 19:59:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:44.152 19:59:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:44.152 19:59:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:44.152 19:59:16 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:44.152 19:59:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:44.152 19:59:16 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:44.411 19:59:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:44.411 19:59:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:44.411 19:59:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:44.411 19:59:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.411 19:59:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:44.411 19:59:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:44.411 19:59:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:44.411 19:59:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:44.411 No valid GPT data, bailing 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # pt= 00:03:44.411 19:59:16 -- scripts/common.sh@395 -- # return 1 00:03:44.411 19:59:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:44.411 1+0 records in 00:03:44.411 1+0 records out 00:03:44.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600302 s, 175 MB/s 00:03:44.411 19:59:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.411 19:59:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:44.411 19:59:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:44.411 19:59:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:44.411 19:59:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:44.411 No valid GPT data, bailing 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # pt= 00:03:44.411 19:59:16 -- scripts/common.sh@395 -- # return 1 00:03:44.411 19:59:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:44.411 1+0 records in 00:03:44.411 1+0 records 
out 00:03:44.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621062 s, 169 MB/s 00:03:44.411 19:59:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.411 19:59:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:44.411 19:59:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:44.411 19:59:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:44.411 19:59:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:44.411 No valid GPT data, bailing 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:44.411 19:59:16 -- scripts/common.sh@394 -- # pt= 00:03:44.411 19:59:16 -- scripts/common.sh@395 -- # return 1 00:03:44.411 19:59:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:44.411 1+0 records in 00:03:44.411 1+0 records out 00:03:44.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641813 s, 163 MB/s 00:03:44.411 19:59:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.411 19:59:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:44.411 19:59:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:44.411 19:59:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:44.411 19:59:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:44.670 No valid GPT data, bailing 00:03:44.670 19:59:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:44.670 19:59:16 -- scripts/common.sh@394 -- # pt= 00:03:44.670 19:59:16 -- scripts/common.sh@395 -- # return 1 00:03:44.670 19:59:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:44.670 1+0 records in 00:03:44.670 1+0 records out 00:03:44.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622579 s, 168 MB/s 00:03:44.670 19:59:16 -- spdk/autotest.sh@105 -- # sync 00:03:44.670 19:59:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:03:44.670 19:59:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.670 19:59:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:47.204 19:59:19 -- spdk/autotest.sh@111 -- # uname -s 00:03:47.204 19:59:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:47.204 19:59:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:47.204 19:59:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:48.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.143 Hugepages 00:03:48.143 node hugesize free / total 00:03:48.143 node0 1048576kB 0 / 0 00:03:48.143 node0 2048kB 0 / 0 00:03:48.143 00:03:48.143 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.143 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:48.403 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:48.403 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:48.403 19:59:20 -- spdk/autotest.sh@117 -- # uname -s 00:03:48.403 19:59:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:48.403 19:59:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:48.403 19:59:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.343 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.343 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.343 19:59:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:50.729 19:59:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:50.729 19:59:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:50.729 19:59:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:50.729 19:59:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:03:50.729 19:59:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:50.729 19:59:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:50.729 19:59:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.729 19:59:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.729 19:59:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.729 19:59:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:50.729 19:59:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:50.729 19:59:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.996 Waiting for block devices as requested 00:03:50.996 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:51.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:51.277 19:59:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.277 19:59:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:51.277 19:59:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:51.277 19:59:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:51.277 19:59:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:51.277 
19:59:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:51.277 19:59:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.277 19:59:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.277 19:59:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.277 19:59:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.277 19:59:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.277 19:59:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.277 19:59:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.277 19:59:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.277 19:59:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.277 19:59:23 -- common/autotest_common.sh@1543 -- # continue 00:03:51.277 19:59:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.277 19:59:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:51.277 19:59:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:51.277 19:59:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:51.278 19:59:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:51.278 19:59:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.278 19:59:23 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.278 19:59:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.278 19:59:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.278 19:59:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.278 19:59:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.278 19:59:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.278 19:59:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.278 19:59:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.278 19:59:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.278 19:59:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.278 19:59:23 -- common/autotest_common.sh@1543 -- # continue 00:03:51.278 19:59:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:51.278 19:59:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.278 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:03:51.553 19:59:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:51.553 19:59:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.553 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:03:51.553 19:59:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.394 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.394 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.394 19:59:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:52.394 19:59:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.394 19:59:24 -- common/autotest_common.sh@10 -- # set +x 00:03:52.394 19:59:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:52.394 19:59:24 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:52.394 19:59:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:52.394 19:59:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:52.394 19:59:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:52.394 19:59:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:52.394 19:59:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:52.652 19:59:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:52.652 19:59:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:52.652 19:59:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:52.652 19:59:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.652 19:59:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:52.652 19:59:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:52.652 19:59:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:52.652 19:59:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:52.652 19:59:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.652 19:59:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:52.652 19:59:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.652 19:59:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.652 19:59:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.652 19:59:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:52.652 19:59:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.652 19:59:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.652 19:59:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:52.652 19:59:24 -- 
common/autotest_common.sh@1572 -- # return 0 00:03:52.652 19:59:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:52.652 19:59:24 -- common/autotest_common.sh@1580 -- # return 0 00:03:52.652 19:59:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:52.652 19:59:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:52.652 19:59:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.652 19:59:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.652 19:59:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:52.652 19:59:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.652 19:59:24 -- common/autotest_common.sh@10 -- # set +x 00:03:52.652 19:59:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:52.652 19:59:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.652 19:59:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.652 19:59:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.652 19:59:24 -- common/autotest_common.sh@10 -- # set +x 00:03:52.652 ************************************ 00:03:52.652 START TEST env 00:03:52.652 ************************************ 00:03:52.652 19:59:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.652 * Looking for test storage... 
00:03:52.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:52.652 19:59:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.652 19:59:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.652 19:59:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.912 19:59:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.912 19:59:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.912 19:59:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.912 19:59:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.912 19:59:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.912 19:59:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.912 19:59:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.912 19:59:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.912 19:59:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.912 19:59:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.912 19:59:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.912 19:59:24 env -- scripts/common.sh@344 -- # case "$op" in 00:03:52.912 19:59:24 env -- scripts/common.sh@345 -- # : 1 00:03:52.912 19:59:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.912 19:59:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.912 19:59:24 env -- scripts/common.sh@365 -- # decimal 1 00:03:52.912 19:59:24 env -- scripts/common.sh@353 -- # local d=1 00:03:52.912 19:59:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.912 19:59:24 env -- scripts/common.sh@355 -- # echo 1 00:03:52.912 19:59:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.912 19:59:24 env -- scripts/common.sh@366 -- # decimal 2 00:03:52.912 19:59:24 env -- scripts/common.sh@353 -- # local d=2 00:03:52.912 19:59:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.912 19:59:24 env -- scripts/common.sh@355 -- # echo 2 00:03:52.912 19:59:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.912 19:59:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.912 19:59:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.912 19:59:24 env -- scripts/common.sh@368 -- # return 0 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.912 --rc genhtml_branch_coverage=1 00:03:52.912 --rc genhtml_function_coverage=1 00:03:52.912 --rc genhtml_legend=1 00:03:52.912 --rc geninfo_all_blocks=1 00:03:52.912 --rc geninfo_unexecuted_blocks=1 00:03:52.912 00:03:52.912 ' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.912 --rc genhtml_branch_coverage=1 00:03:52.912 --rc genhtml_function_coverage=1 00:03:52.912 --rc genhtml_legend=1 00:03:52.912 --rc geninfo_all_blocks=1 00:03:52.912 --rc geninfo_unexecuted_blocks=1 00:03:52.912 00:03:52.912 ' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:52.912 --rc genhtml_branch_coverage=1 00:03:52.912 --rc genhtml_function_coverage=1 00:03:52.912 --rc genhtml_legend=1 00:03:52.912 --rc geninfo_all_blocks=1 00:03:52.912 --rc geninfo_unexecuted_blocks=1 00:03:52.912 00:03:52.912 ' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.912 --rc genhtml_branch_coverage=1 00:03:52.912 --rc genhtml_function_coverage=1 00:03:52.912 --rc genhtml_legend=1 00:03:52.912 --rc geninfo_all_blocks=1 00:03:52.912 --rc geninfo_unexecuted_blocks=1 00:03:52.912 00:03:52.912 ' 00:03:52.912 19:59:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.912 19:59:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.912 19:59:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.912 ************************************ 00:03:52.912 START TEST env_memory 00:03:52.912 ************************************ 00:03:52.912 19:59:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:52.912 00:03:52.912 00:03:52.912 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.912 http://cunit.sourceforge.net/ 00:03:52.912 00:03:52.912 00:03:52.912 Suite: memory 00:03:52.912 Test: alloc and free memory map ...[2024-12-08 19:59:24.790126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:52.912 passed 00:03:52.912 Test: mem map translation ...[2024-12-08 19:59:24.832822] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:52.912 [2024-12-08 19:59:24.832861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:52.912 [2024-12-08 19:59:24.832918] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:52.912 [2024-12-08 19:59:24.832937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.171 passed 00:03:53.171 Test: mem map registration ...[2024-12-08 19:59:24.899157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:53.171 [2024-12-08 19:59:24.899204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:53.171 passed 00:03:53.171 Test: mem map adjacent registrations ...passed 00:03:53.171 00:03:53.171 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.171 suites 1 1 n/a 0 0 00:03:53.171 tests 4 4 4 0 0 00:03:53.171 asserts 152 152 152 0 n/a 00:03:53.171 00:03:53.171 Elapsed time = 0.235 seconds 00:03:53.171 00:03:53.171 real 0m0.284s 00:03:53.171 user 0m0.252s 00:03:53.171 sys 0m0.023s 00:03:53.171 19:59:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.171 19:59:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.171 ************************************ 00:03:53.171 END TEST env_memory 00:03:53.172 ************************************ 00:03:53.172 19:59:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.172 19:59:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.172 19:59:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.172 19:59:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.172 
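The env_memory unit test above ran inside SPDK's run_test wrapper, which prints the asterisk START/END banners visible throughout this log and propagates the wrapped binary's exit status. A minimal sketch of that banner pattern (assumption: simplified from the real `run_test` in autotest_common.sh, which also records timing and manages xtrace; this is an illustration, not the actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the START/END banner pattern seen in this log.
# run_test NAME CMD... runs CMD, bracketing its output with banners
# and returning CMD's exit status.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?   # capture the wrapped command's status before returning
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Example: wrap a trivial command.
run_test demo_true true
```

The banners make a multi-megabyte log greppable: searching for `START TEST env_memory` / `END TEST env_memory` isolates one test's output.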
************************************ 00:03:53.172 START TEST env_vtophys 00:03:53.172 ************************************ 00:03:53.172 19:59:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.172 EAL: lib.eal log level changed from notice to debug 00:03:53.172 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 1 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 2 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 3 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 4 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 5 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 6 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 7 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 8 as core 0 on socket 0 00:03:53.172 EAL: Detected lcore 9 as core 0 on socket 0 00:03:53.172 EAL: Maximum logical cores by configuration: 128 00:03:53.172 EAL: Detected CPU lcores: 10 00:03:53.172 EAL: Detected NUMA nodes: 1 00:03:53.172 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.172 EAL: Detected shared linkage of DPDK 00:03:53.172 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.430 EAL: Selected IOVA mode 'PA' 00:03:53.430 EAL: Probing VFIO support... 00:03:53.430 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.430 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:53.430 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.430 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.430 EAL: Setting up physically contiguous memory... 
00:03:53.430 EAL: Setting maximum number of open files to 524288 00:03:53.430 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.430 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.430 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.430 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.430 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.430 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.430 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.430 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.430 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.430 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.430 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.430 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.430 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.430 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.430 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.430 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.431 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.431 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.431 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.431 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.431 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.431 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.431 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.431 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.431 EAL: Hugepages will be freed exactly as allocated. 
00:03:53.431 EAL: No shared files mode enabled, IPC is disabled 00:03:53.431 EAL: No shared files mode enabled, IPC is disabled 00:03:53.431 EAL: TSC frequency is ~2290000 KHz 00:03:53.431 EAL: Main lcore 0 is ready (tid=7faf6fd18a40;cpuset=[0]) 00:03:53.431 EAL: Trying to obtain current memory policy. 00:03:53.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.431 EAL: Restoring previous memory policy: 0 00:03:53.431 EAL: request: mp_malloc_sync 00:03:53.431 EAL: No shared files mode enabled, IPC is disabled 00:03:53.431 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.431 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.431 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.431 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.431 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:53.431 00:03:53.431 00:03:53.431 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.431 http://cunit.sourceforge.net/ 00:03:53.431 00:03:53.431 00:03:53.431 Suite: components_suite 00:03:53.690 Test: vtophys_malloc_test ...passed 00:03:53.690 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.690 EAL: Restoring previous memory policy: 4 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.690 EAL: Trying to obtain current memory policy. 
00:03:53.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.690 EAL: Restoring previous memory policy: 4 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.690 EAL: Trying to obtain current memory policy. 00:03:53.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.690 EAL: Restoring previous memory policy: 4 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.690 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.690 EAL: request: mp_malloc_sync 00:03:53.690 EAL: No shared files mode enabled, IPC is disabled 00:03:53.690 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.989 EAL: Trying to obtain current memory policy. 00:03:53.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.989 EAL: Restoring previous memory policy: 4 00:03:53.989 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.989 EAL: request: mp_malloc_sync 00:03:53.989 EAL: No shared files mode enabled, IPC is disabled 00:03:53.989 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.989 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.989 EAL: request: mp_malloc_sync 00:03:53.989 EAL: No shared files mode enabled, IPC is disabled 00:03:53.989 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.989 EAL: Trying to obtain current memory policy. 
00:03:53.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.989 EAL: Restoring previous memory policy: 4 00:03:53.989 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.989 EAL: request: mp_malloc_sync 00:03:53.989 EAL: No shared files mode enabled, IPC is disabled 00:03:53.989 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.989 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.989 EAL: request: mp_malloc_sync 00:03:53.989 EAL: No shared files mode enabled, IPC is disabled 00:03:53.989 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.989 EAL: Trying to obtain current memory policy. 00:03:53.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.990 EAL: Restoring previous memory policy: 4 00:03:53.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.990 EAL: request: mp_malloc_sync 00:03:53.990 EAL: No shared files mode enabled, IPC is disabled 00:03:53.990 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.249 EAL: request: mp_malloc_sync 00:03:54.249 EAL: No shared files mode enabled, IPC is disabled 00:03:54.249 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.249 EAL: Trying to obtain current memory policy. 00:03:54.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.249 EAL: Restoring previous memory policy: 4 00:03:54.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.249 EAL: request: mp_malloc_sync 00:03:54.249 EAL: No shared files mode enabled, IPC is disabled 00:03:54.249 EAL: Heap on socket 0 was expanded by 130MB 00:03:54.508 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.508 EAL: request: mp_malloc_sync 00:03:54.508 EAL: No shared files mode enabled, IPC is disabled 00:03:54.508 EAL: Heap on socket 0 was shrunk by 130MB 00:03:54.766 EAL: Trying to obtain current memory policy. 
00:03:54.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.766 EAL: Restoring previous memory policy: 4 00:03:54.766 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.766 EAL: request: mp_malloc_sync 00:03:54.767 EAL: No shared files mode enabled, IPC is disabled 00:03:54.767 EAL: Heap on socket 0 was expanded by 258MB 00:03:55.334 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.334 EAL: request: mp_malloc_sync 00:03:55.334 EAL: No shared files mode enabled, IPC is disabled 00:03:55.334 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.592 EAL: Trying to obtain current memory policy. 00:03:55.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.851 EAL: Restoring previous memory policy: 4 00:03:55.851 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.851 EAL: request: mp_malloc_sync 00:03:55.851 EAL: No shared files mode enabled, IPC is disabled 00:03:55.851 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.791 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.791 EAL: request: mp_malloc_sync 00:03:56.791 EAL: No shared files mode enabled, IPC is disabled 00:03:56.791 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.732 EAL: Trying to obtain current memory policy. 
00:03:57.732 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.732 EAL: Restoring previous memory policy: 4 00:03:57.732 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.732 EAL: request: mp_malloc_sync 00:03:57.732 EAL: No shared files mode enabled, IPC is disabled 00:03:57.732 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.643 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.643 EAL: request: mp_malloc_sync 00:03:59.643 EAL: No shared files mode enabled, IPC is disabled 00:03:59.643 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.551 passed 00:04:01.551 00:04:01.551 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.551 suites 1 1 n/a 0 0 00:04:01.551 tests 2 2 2 0 0 00:04:01.551 asserts 5481 5481 5481 0 n/a 00:04:01.551 00:04:01.551 Elapsed time = 7.866 seconds 00:04:01.551 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.551 EAL: request: mp_malloc_sync 00:04:01.551 EAL: No shared files mode enabled, IPC is disabled 00:04:01.551 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.551 EAL: No shared files mode enabled, IPC is disabled 00:04:01.551 EAL: No shared files mode enabled, IPC is disabled 00:04:01.551 EAL: No shared files mode enabled, IPC is disabled 00:04:01.551 00:04:01.551 real 0m8.189s 00:04:01.551 user 0m7.254s 00:04:01.551 sys 0m0.779s 00:04:01.551 19:59:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.551 19:59:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.551 ************************************ 00:04:01.551 END TEST env_vtophys 00:04:01.551 ************************************ 00:04:01.551 19:59:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.551 19:59:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.551 19:59:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.551 19:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.551 
************************************ 00:04:01.551 START TEST env_pci 00:04:01.551 ************************************ 00:04:01.551 19:59:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.551 00:04:01.551 00:04:01.551 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.551 http://cunit.sourceforge.net/ 00:04:01.551 00:04:01.551 00:04:01.551 Suite: pci 00:04:01.551 Test: pci_hook ...[2024-12-08 19:59:33.358415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56685 has claimed it 00:04:01.551 passed 00:04:01.551 00:04:01.551 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.551 suites 1 1 n/a 0 0 00:04:01.551 tests 1 1 1 0 0 00:04:01.551 asserts 25 25 25 0 n/a 00:04:01.551 00:04:01.551 Elapsed time = 0.006 seconds 00:04:01.551 EAL: Cannot find device (10000:00:01.0) 00:04:01.551 EAL: Failed to attach device on primary process 00:04:01.551 00:04:01.551 real 0m0.104s 00:04:01.551 user 0m0.044s 00:04:01.551 sys 0m0.059s 00:04:01.551 19:59:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.551 19:59:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.551 ************************************ 00:04:01.551 END TEST env_pci 00:04:01.551 ************************************ 00:04:01.551 19:59:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.551 19:59:33 env -- env/env.sh@15 -- # uname 00:04:01.551 19:59:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.551 19:59:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.551 19:59:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.551 19:59:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:01.551 19:59:33 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.551 19:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.551 ************************************ 00:04:01.551 START TEST env_dpdk_post_init 00:04:01.551 ************************************ 00:04:01.551 19:59:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.811 EAL: Detected CPU lcores: 10 00:04:01.811 EAL: Detected NUMA nodes: 1 00:04:01.811 EAL: Detected shared linkage of DPDK 00:04:01.811 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.811 EAL: Selected IOVA mode 'PA' 00:04:01.811 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.811 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:01.811 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:01.811 Starting DPDK initialization... 00:04:01.811 Starting SPDK post initialization... 00:04:01.811 SPDK NVMe probe 00:04:01.811 Attaching to 0000:00:10.0 00:04:01.811 Attaching to 0000:00:11.0 00:04:01.811 Attached to 0000:00:10.0 00:04:01.811 Attached to 0000:00:11.0 00:04:01.811 Cleaning up... 
00:04:01.811 ************************************ 00:04:01.811 END TEST env_dpdk_post_init 00:04:01.811 ************************************ 00:04:01.811 00:04:01.811 real 0m0.293s 00:04:01.811 user 0m0.100s 00:04:01.811 sys 0m0.093s 00:04:01.811 19:59:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.811 19:59:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.071 19:59:33 env -- env/env.sh@26 -- # uname 00:04:02.071 19:59:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.071 19:59:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.071 19:59:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.071 19:59:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.071 19:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.071 ************************************ 00:04:02.071 START TEST env_mem_callbacks 00:04:02.071 ************************************ 00:04:02.071 19:59:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.071 EAL: Detected CPU lcores: 10 00:04:02.071 EAL: Detected NUMA nodes: 1 00:04:02.071 EAL: Detected shared linkage of DPDK 00:04:02.071 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.071 EAL: Selected IOVA mode 'PA' 00:04:02.071 00:04:02.071 00:04:02.071 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.071 http://cunit.sourceforge.net/ 00:04:02.071 00:04:02.071 00:04:02.071 Suite: memory 00:04:02.071 Test: test ... 
00:04:02.071 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.071 register 0x200000200000 2097152 00:04:02.071 malloc 3145728 00:04:02.071 register 0x200000400000 4194304 00:04:02.071 buf 0x2000004fffc0 len 3145728 PASSED 00:04:02.071 malloc 64 00:04:02.071 buf 0x2000004ffec0 len 64 PASSED 00:04:02.071 malloc 4194304 00:04:02.071 register 0x200000800000 6291456 00:04:02.071 buf 0x2000009fffc0 len 4194304 PASSED 00:04:02.071 free 0x2000004fffc0 3145728 00:04:02.071 free 0x2000004ffec0 64 00:04:02.329 unregister 0x200000400000 4194304 PASSED 00:04:02.329 free 0x2000009fffc0 4194304 00:04:02.329 unregister 0x200000800000 6291456 PASSED 00:04:02.329 malloc 8388608 00:04:02.329 register 0x200000400000 10485760 00:04:02.329 buf 0x2000005fffc0 len 8388608 PASSED 00:04:02.329 free 0x2000005fffc0 8388608 00:04:02.329 unregister 0x200000400000 10485760 PASSED 00:04:02.329 passed 00:04:02.329 00:04:02.329 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.329 suites 1 1 n/a 0 0 00:04:02.329 tests 1 1 1 0 0 00:04:02.329 asserts 15 15 15 0 n/a 00:04:02.329 00:04:02.329 Elapsed time = 0.087 seconds 00:04:02.329 00:04:02.329 real 0m0.278s 00:04:02.329 user 0m0.112s 00:04:02.329 sys 0m0.063s 00:04:02.329 19:59:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.329 19:59:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.329 ************************************ 00:04:02.329 END TEST env_mem_callbacks 00:04:02.329 ************************************ 00:04:02.329 00:04:02.329 real 0m9.674s 00:04:02.329 user 0m7.954s 00:04:02.329 sys 0m1.356s 00:04:02.329 19:59:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.329 19:59:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.329 ************************************ 00:04:02.329 END TEST env 00:04:02.329 ************************************ 00:04:02.329 19:59:34 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.329 19:59:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.329 19:59:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.329 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:02.329 ************************************ 00:04:02.329 START TEST rpc 00:04:02.329 ************************************ 00:04:02.329 19:59:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.588 * Looking for test storage... 00:04:02.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.588 19:59:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.588 19:59:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.588 19:59:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.588 19:59:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.588 19:59:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.588 19:59:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.588 19:59:34 rpc -- scripts/common.sh@345 -- # : 1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.588 19:59:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.588 19:59:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.588 19:59:34 rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.588 19:59:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.588 19:59:34 rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.588 19:59:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.588 19:59:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.588 19:59:34 rpc -- scripts/common.sh@368 -- # return 0 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.588 --rc genhtml_branch_coverage=1 00:04:02.588 --rc genhtml_function_coverage=1 00:04:02.588 --rc genhtml_legend=1 00:04:02.588 --rc geninfo_all_blocks=1 00:04:02.588 --rc geninfo_unexecuted_blocks=1 00:04:02.588 00:04:02.588 ' 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.588 --rc genhtml_branch_coverage=1 00:04:02.588 --rc genhtml_function_coverage=1 00:04:02.588 --rc genhtml_legend=1 00:04:02.588 --rc geninfo_all_blocks=1 00:04:02.588 --rc geninfo_unexecuted_blocks=1 00:04:02.588 00:04:02.588 ' 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:02.588 --rc genhtml_branch_coverage=1 00:04:02.588 --rc genhtml_function_coverage=1 00:04:02.588 --rc genhtml_legend=1 00:04:02.588 --rc geninfo_all_blocks=1 00:04:02.588 --rc geninfo_unexecuted_blocks=1 00:04:02.588 00:04:02.588 ' 00:04:02.588 19:59:34 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.589 --rc genhtml_branch_coverage=1 00:04:02.589 --rc genhtml_function_coverage=1 00:04:02.589 --rc genhtml_legend=1 00:04:02.589 --rc geninfo_all_blocks=1 00:04:02.589 --rc geninfo_unexecuted_blocks=1 00:04:02.589 00:04:02.589 ' 00:04:02.589 19:59:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56818 00:04:02.589 19:59:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:02.589 19:59:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.589 19:59:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56818 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 56818 ']' 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.589 19:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.589 [2024-12-08 19:59:34.543371] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:02.589 [2024-12-08 19:59:34.543503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56818 ] 00:04:02.848 [2024-12-08 19:59:34.721655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.107 [2024-12-08 19:59:34.837331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:03.107 [2024-12-08 19:59:34.837395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56818' to capture a snapshot of events at runtime. 00:04:03.107 [2024-12-08 19:59:34.837404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:03.107 [2024-12-08 19:59:34.837415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:03.107 [2024-12-08 19:59:34.837423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56818 for offline analysis/debug. 
00:04:03.107 [2024-12-08 19:59:34.838685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.042 19:59:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.042 19:59:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:04.042 19:59:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.042 19:59:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.042 19:59:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:04.042 19:59:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:04.042 19:59:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.042 19:59:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.042 19:59:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.042 ************************************ 00:04:04.042 START TEST rpc_integrity 00:04:04.042 ************************************ 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.042 19:59:35 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.042 { 00:04:04.042 "name": "Malloc0", 00:04:04.042 "aliases": [ 00:04:04.042 "53c8b609-e365-4d31-8af9-c39dae493479" 00:04:04.042 ], 00:04:04.042 "product_name": "Malloc disk", 00:04:04.042 "block_size": 512, 00:04:04.042 "num_blocks": 16384, 00:04:04.042 "uuid": "53c8b609-e365-4d31-8af9-c39dae493479", 00:04:04.042 "assigned_rate_limits": { 00:04:04.042 "rw_ios_per_sec": 0, 00:04:04.042 "rw_mbytes_per_sec": 0, 00:04:04.042 "r_mbytes_per_sec": 0, 00:04:04.042 "w_mbytes_per_sec": 0 00:04:04.042 }, 00:04:04.042 "claimed": false, 00:04:04.042 "zoned": false, 00:04:04.042 "supported_io_types": { 00:04:04.042 "read": true, 00:04:04.042 "write": true, 00:04:04.042 "unmap": true, 00:04:04.042 "flush": true, 00:04:04.042 "reset": true, 00:04:04.042 "nvme_admin": false, 00:04:04.042 "nvme_io": false, 00:04:04.042 "nvme_io_md": false, 00:04:04.042 "write_zeroes": true, 00:04:04.042 "zcopy": true, 00:04:04.042 "get_zone_info": false, 00:04:04.042 "zone_management": false, 00:04:04.042 "zone_append": false, 00:04:04.042 "compare": false, 00:04:04.042 "compare_and_write": false, 00:04:04.042 "abort": true, 00:04:04.042 "seek_hole": false, 
00:04:04.042 "seek_data": false, 00:04:04.042 "copy": true, 00:04:04.042 "nvme_iov_md": false 00:04:04.042 }, 00:04:04.042 "memory_domains": [ 00:04:04.042 { 00:04:04.042 "dma_device_id": "system", 00:04:04.042 "dma_device_type": 1 00:04:04.042 }, 00:04:04.042 { 00:04:04.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.042 "dma_device_type": 2 00:04:04.042 } 00:04:04.042 ], 00:04:04.042 "driver_specific": {} 00:04:04.042 } 00:04:04.042 ]' 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.042 [2024-12-08 19:59:35.881193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:04.042 [2024-12-08 19:59:35.881281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.042 [2024-12-08 19:59:35.881324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:04.042 [2024-12-08 19:59:35.881343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.042 [2024-12-08 19:59:35.883620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.042 [2024-12-08 19:59:35.883665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.042 Passthru0 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:04.042 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.042 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.042 { 00:04:04.042 "name": "Malloc0", 00:04:04.042 "aliases": [ 00:04:04.042 "53c8b609-e365-4d31-8af9-c39dae493479" 00:04:04.042 ], 00:04:04.042 "product_name": "Malloc disk", 00:04:04.042 "block_size": 512, 00:04:04.042 "num_blocks": 16384, 00:04:04.042 "uuid": "53c8b609-e365-4d31-8af9-c39dae493479", 00:04:04.042 "assigned_rate_limits": { 00:04:04.042 "rw_ios_per_sec": 0, 00:04:04.042 "rw_mbytes_per_sec": 0, 00:04:04.042 "r_mbytes_per_sec": 0, 00:04:04.042 "w_mbytes_per_sec": 0 00:04:04.042 }, 00:04:04.042 "claimed": true, 00:04:04.042 "claim_type": "exclusive_write", 00:04:04.042 "zoned": false, 00:04:04.042 "supported_io_types": { 00:04:04.042 "read": true, 00:04:04.042 "write": true, 00:04:04.042 "unmap": true, 00:04:04.043 "flush": true, 00:04:04.043 "reset": true, 00:04:04.043 "nvme_admin": false, 00:04:04.043 "nvme_io": false, 00:04:04.043 "nvme_io_md": false, 00:04:04.043 "write_zeroes": true, 00:04:04.043 "zcopy": true, 00:04:04.043 "get_zone_info": false, 00:04:04.043 "zone_management": false, 00:04:04.043 "zone_append": false, 00:04:04.043 "compare": false, 00:04:04.043 "compare_and_write": false, 00:04:04.043 "abort": true, 00:04:04.043 "seek_hole": false, 00:04:04.043 "seek_data": false, 00:04:04.043 "copy": true, 00:04:04.043 "nvme_iov_md": false 00:04:04.043 }, 00:04:04.043 "memory_domains": [ 00:04:04.043 { 00:04:04.043 "dma_device_id": "system", 00:04:04.043 "dma_device_type": 1 00:04:04.043 }, 00:04:04.043 { 00:04:04.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.043 "dma_device_type": 2 00:04:04.043 } 00:04:04.043 ], 00:04:04.043 "driver_specific": {} 00:04:04.043 }, 00:04:04.043 { 00:04:04.043 "name": "Passthru0", 00:04:04.043 "aliases": [ 00:04:04.043 "42ea6814-4c5f-5c14-bc64-c68ab26178f1" 00:04:04.043 ], 00:04:04.043 "product_name": "passthru", 00:04:04.043 
"block_size": 512, 00:04:04.043 "num_blocks": 16384, 00:04:04.043 "uuid": "42ea6814-4c5f-5c14-bc64-c68ab26178f1", 00:04:04.043 "assigned_rate_limits": { 00:04:04.043 "rw_ios_per_sec": 0, 00:04:04.043 "rw_mbytes_per_sec": 0, 00:04:04.043 "r_mbytes_per_sec": 0, 00:04:04.043 "w_mbytes_per_sec": 0 00:04:04.043 }, 00:04:04.043 "claimed": false, 00:04:04.043 "zoned": false, 00:04:04.043 "supported_io_types": { 00:04:04.043 "read": true, 00:04:04.043 "write": true, 00:04:04.043 "unmap": true, 00:04:04.043 "flush": true, 00:04:04.043 "reset": true, 00:04:04.043 "nvme_admin": false, 00:04:04.043 "nvme_io": false, 00:04:04.043 "nvme_io_md": false, 00:04:04.043 "write_zeroes": true, 00:04:04.043 "zcopy": true, 00:04:04.043 "get_zone_info": false, 00:04:04.043 "zone_management": false, 00:04:04.043 "zone_append": false, 00:04:04.043 "compare": false, 00:04:04.043 "compare_and_write": false, 00:04:04.043 "abort": true, 00:04:04.043 "seek_hole": false, 00:04:04.043 "seek_data": false, 00:04:04.043 "copy": true, 00:04:04.043 "nvme_iov_md": false 00:04:04.043 }, 00:04:04.043 "memory_domains": [ 00:04:04.043 { 00:04:04.043 "dma_device_id": "system", 00:04:04.043 "dma_device_type": 1 00:04:04.043 }, 00:04:04.043 { 00:04:04.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.043 "dma_device_type": 2 00:04:04.043 } 00:04:04.043 ], 00:04:04.043 "driver_specific": { 00:04:04.043 "passthru": { 00:04:04.043 "name": "Passthru0", 00:04:04.043 "base_bdev_name": "Malloc0" 00:04:04.043 } 00:04:04.043 } 00:04:04.043 } 00:04:04.043 ]' 00:04:04.043 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.043 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.043 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.043 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.043 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.043 19:59:35 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.043 19:59:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:04.043 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.043 19:59:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.043 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.043 19:59:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.043 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.043 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.043 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.043 19:59:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.301 19:59:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.302 19:59:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.302 00:04:04.302 real 0m0.341s 00:04:04.302 user 0m0.197s 00:04:04.302 sys 0m0.052s 00:04:04.302 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.302 19:59:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 ************************************ 00:04:04.302 END TEST rpc_integrity 00:04:04.302 ************************************ 00:04:04.302 19:59:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:04.302 19:59:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.302 19:59:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.302 19:59:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 ************************************ 00:04:04.302 START TEST rpc_plugins 00:04:04.302 ************************************ 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:04.302 { 00:04:04.302 "name": "Malloc1", 00:04:04.302 "aliases": [ 00:04:04.302 "a13a5156-0d43-4cdd-9b8d-2fb4cbb02a94" 00:04:04.302 ], 00:04:04.302 "product_name": "Malloc disk", 00:04:04.302 "block_size": 4096, 00:04:04.302 "num_blocks": 256, 00:04:04.302 "uuid": "a13a5156-0d43-4cdd-9b8d-2fb4cbb02a94", 00:04:04.302 "assigned_rate_limits": { 00:04:04.302 "rw_ios_per_sec": 0, 00:04:04.302 "rw_mbytes_per_sec": 0, 00:04:04.302 "r_mbytes_per_sec": 0, 00:04:04.302 "w_mbytes_per_sec": 0 00:04:04.302 }, 00:04:04.302 "claimed": false, 00:04:04.302 "zoned": false, 00:04:04.302 "supported_io_types": { 00:04:04.302 "read": true, 00:04:04.302 "write": true, 00:04:04.302 "unmap": true, 00:04:04.302 "flush": true, 00:04:04.302 "reset": true, 00:04:04.302 "nvme_admin": false, 00:04:04.302 "nvme_io": false, 00:04:04.302 "nvme_io_md": false, 00:04:04.302 "write_zeroes": true, 00:04:04.302 "zcopy": true, 00:04:04.302 "get_zone_info": false, 00:04:04.302 "zone_management": false, 00:04:04.302 "zone_append": false, 00:04:04.302 "compare": false, 00:04:04.302 "compare_and_write": false, 00:04:04.302 "abort": true, 00:04:04.302 "seek_hole": false, 00:04:04.302 "seek_data": false, 00:04:04.302 "copy": 
true, 00:04:04.302 "nvme_iov_md": false 00:04:04.302 }, 00:04:04.302 "memory_domains": [ 00:04:04.302 { 00:04:04.302 "dma_device_id": "system", 00:04:04.302 "dma_device_type": 1 00:04:04.302 }, 00:04:04.302 { 00:04:04.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.302 "dma_device_type": 2 00:04:04.302 } 00:04:04.302 ], 00:04:04.302 "driver_specific": {} 00:04:04.302 } 00:04:04.302 ]' 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.302 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:04.302 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:04.561 19:59:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:04.561 00:04:04.561 real 0m0.162s 00:04:04.561 user 0m0.092s 00:04:04.561 sys 0m0.031s 00:04:04.561 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.561 19:59:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.561 ************************************ 00:04:04.561 END TEST rpc_plugins 00:04:04.561 ************************************ 00:04:04.561 19:59:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:04.561 19:59:36 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.561 19:59:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.561 19:59:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.561 ************************************ 00:04:04.561 START TEST rpc_trace_cmd_test 00:04:04.561 ************************************ 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:04.561 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56818", 00:04:04.561 "tpoint_group_mask": "0x8", 00:04:04.561 "iscsi_conn": { 00:04:04.561 "mask": "0x2", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "scsi": { 00:04:04.561 "mask": "0x4", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "bdev": { 00:04:04.561 "mask": "0x8", 00:04:04.561 "tpoint_mask": "0xffffffffffffffff" 00:04:04.561 }, 00:04:04.561 "nvmf_rdma": { 00:04:04.561 "mask": "0x10", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "nvmf_tcp": { 00:04:04.561 "mask": "0x20", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "ftl": { 00:04:04.561 "mask": "0x40", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "blobfs": { 00:04:04.561 "mask": "0x80", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "dsa": { 00:04:04.561 "mask": "0x200", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "thread": { 00:04:04.561 "mask": "0x400", 00:04:04.561 
"tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "nvme_pcie": { 00:04:04.561 "mask": "0x800", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "iaa": { 00:04:04.561 "mask": "0x1000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "nvme_tcp": { 00:04:04.561 "mask": "0x2000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "bdev_nvme": { 00:04:04.561 "mask": "0x4000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "sock": { 00:04:04.561 "mask": "0x8000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "blob": { 00:04:04.561 "mask": "0x10000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "bdev_raid": { 00:04:04.561 "mask": "0x20000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 }, 00:04:04.561 "scheduler": { 00:04:04.561 "mask": "0x40000", 00:04:04.561 "tpoint_mask": "0x0" 00:04:04.561 } 00:04:04.561 }' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:04.561 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:04.820 19:59:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:04.820 00:04:04.820 real 0m0.222s 00:04:04.820 user 0m0.178s 00:04:04.820 sys 0m0.033s 00:04:04.820 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:04.820 19:59:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 ************************************ 00:04:04.820 END TEST rpc_trace_cmd_test 00:04:04.820 ************************************ 00:04:04.820 19:59:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:04.820 19:59:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:04.820 19:59:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:04.820 19:59:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.820 19:59:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.820 19:59:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 ************************************ 00:04:04.820 START TEST rpc_daemon_integrity 00:04:04.820 ************************************ 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.820 { 00:04:04.820 "name": "Malloc2", 00:04:04.820 "aliases": [ 00:04:04.820 "b57f4fd0-4a31-495a-a41f-d9aafcbb237c" 00:04:04.820 ], 00:04:04.820 "product_name": "Malloc disk", 00:04:04.820 "block_size": 512, 00:04:04.820 "num_blocks": 16384, 00:04:04.820 "uuid": "b57f4fd0-4a31-495a-a41f-d9aafcbb237c", 00:04:04.820 "assigned_rate_limits": { 00:04:04.820 "rw_ios_per_sec": 0, 00:04:04.820 "rw_mbytes_per_sec": 0, 00:04:04.820 "r_mbytes_per_sec": 0, 00:04:04.820 "w_mbytes_per_sec": 0 00:04:04.820 }, 00:04:04.820 "claimed": false, 00:04:04.820 "zoned": false, 00:04:04.820 "supported_io_types": { 00:04:04.820 "read": true, 00:04:04.820 "write": true, 00:04:04.820 "unmap": true, 00:04:04.820 "flush": true, 00:04:04.820 "reset": true, 00:04:04.820 "nvme_admin": false, 00:04:04.820 "nvme_io": false, 00:04:04.820 "nvme_io_md": false, 00:04:04.820 "write_zeroes": true, 00:04:04.820 "zcopy": true, 00:04:04.820 "get_zone_info": false, 00:04:04.820 "zone_management": false, 00:04:04.820 "zone_append": false, 00:04:04.820 "compare": false, 00:04:04.820 "compare_and_write": false, 00:04:04.820 "abort": true, 00:04:04.820 "seek_hole": false, 00:04:04.820 "seek_data": false, 00:04:04.820 "copy": true, 00:04:04.820 "nvme_iov_md": false 00:04:04.820 }, 00:04:04.820 "memory_domains": [ 00:04:04.820 { 00:04:04.820 "dma_device_id": "system", 00:04:04.820 "dma_device_type": 1 00:04:04.820 }, 00:04:04.820 { 00:04:04.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.820 "dma_device_type": 2 00:04:04.820 } 
00:04:04.820 ], 00:04:04.820 "driver_specific": {} 00:04:04.820 } 00:04:04.820 ]' 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.820 [2024-12-08 19:59:36.788599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:04.820 [2024-12-08 19:59:36.788666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.820 [2024-12-08 19:59:36.788688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:04.820 [2024-12-08 19:59:36.788700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.820 [2024-12-08 19:59:36.790900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.820 [2024-12-08 19:59:36.790942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.820 Passthru0 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.820 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.079 { 00:04:05.079 "name": "Malloc2", 00:04:05.079 "aliases": [ 00:04:05.079 "b57f4fd0-4a31-495a-a41f-d9aafcbb237c" 
00:04:05.079 ], 00:04:05.079 "product_name": "Malloc disk", 00:04:05.079 "block_size": 512, 00:04:05.079 "num_blocks": 16384, 00:04:05.079 "uuid": "b57f4fd0-4a31-495a-a41f-d9aafcbb237c", 00:04:05.079 "assigned_rate_limits": { 00:04:05.079 "rw_ios_per_sec": 0, 00:04:05.079 "rw_mbytes_per_sec": 0, 00:04:05.079 "r_mbytes_per_sec": 0, 00:04:05.079 "w_mbytes_per_sec": 0 00:04:05.079 }, 00:04:05.079 "claimed": true, 00:04:05.079 "claim_type": "exclusive_write", 00:04:05.079 "zoned": false, 00:04:05.079 "supported_io_types": { 00:04:05.079 "read": true, 00:04:05.079 "write": true, 00:04:05.079 "unmap": true, 00:04:05.079 "flush": true, 00:04:05.079 "reset": true, 00:04:05.079 "nvme_admin": false, 00:04:05.079 "nvme_io": false, 00:04:05.079 "nvme_io_md": false, 00:04:05.079 "write_zeroes": true, 00:04:05.079 "zcopy": true, 00:04:05.079 "get_zone_info": false, 00:04:05.079 "zone_management": false, 00:04:05.079 "zone_append": false, 00:04:05.079 "compare": false, 00:04:05.079 "compare_and_write": false, 00:04:05.079 "abort": true, 00:04:05.079 "seek_hole": false, 00:04:05.079 "seek_data": false, 00:04:05.079 "copy": true, 00:04:05.079 "nvme_iov_md": false 00:04:05.079 }, 00:04:05.079 "memory_domains": [ 00:04:05.079 { 00:04:05.079 "dma_device_id": "system", 00:04:05.079 "dma_device_type": 1 00:04:05.079 }, 00:04:05.079 { 00:04:05.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.079 "dma_device_type": 2 00:04:05.079 } 00:04:05.079 ], 00:04:05.079 "driver_specific": {} 00:04:05.079 }, 00:04:05.079 { 00:04:05.079 "name": "Passthru0", 00:04:05.079 "aliases": [ 00:04:05.079 "1cf8e640-3c58-5096-a851-171242ed491c" 00:04:05.079 ], 00:04:05.079 "product_name": "passthru", 00:04:05.079 "block_size": 512, 00:04:05.079 "num_blocks": 16384, 00:04:05.079 "uuid": "1cf8e640-3c58-5096-a851-171242ed491c", 00:04:05.079 "assigned_rate_limits": { 00:04:05.079 "rw_ios_per_sec": 0, 00:04:05.079 "rw_mbytes_per_sec": 0, 00:04:05.079 "r_mbytes_per_sec": 0, 00:04:05.079 "w_mbytes_per_sec": 0 
00:04:05.079 }, 00:04:05.079 "claimed": false, 00:04:05.079 "zoned": false, 00:04:05.079 "supported_io_types": { 00:04:05.079 "read": true, 00:04:05.079 "write": true, 00:04:05.079 "unmap": true, 00:04:05.079 "flush": true, 00:04:05.079 "reset": true, 00:04:05.079 "nvme_admin": false, 00:04:05.079 "nvme_io": false, 00:04:05.079 "nvme_io_md": false, 00:04:05.079 "write_zeroes": true, 00:04:05.079 "zcopy": true, 00:04:05.079 "get_zone_info": false, 00:04:05.079 "zone_management": false, 00:04:05.079 "zone_append": false, 00:04:05.079 "compare": false, 00:04:05.079 "compare_and_write": false, 00:04:05.079 "abort": true, 00:04:05.079 "seek_hole": false, 00:04:05.079 "seek_data": false, 00:04:05.079 "copy": true, 00:04:05.079 "nvme_iov_md": false 00:04:05.079 }, 00:04:05.079 "memory_domains": [ 00:04:05.079 { 00:04:05.079 "dma_device_id": "system", 00:04:05.079 "dma_device_type": 1 00:04:05.079 }, 00:04:05.079 { 00:04:05.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.079 "dma_device_type": 2 00:04:05.079 } 00:04:05.079 ], 00:04:05.079 "driver_specific": { 00:04:05.079 "passthru": { 00:04:05.079 "name": "Passthru0", 00:04:05.079 "base_bdev_name": "Malloc2" 00:04:05.079 } 00:04:05.079 } 00:04:05.079 } 00:04:05.079 ]' 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.079 00:04:05.079 real 0m0.340s 00:04:05.079 user 0m0.182s 00:04:05.079 sys 0m0.057s 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.079 19:59:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.079 ************************************ 00:04:05.079 END TEST rpc_daemon_integrity 00:04:05.079 ************************************ 00:04:05.080 19:59:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:05.080 19:59:37 rpc -- rpc/rpc.sh@84 -- # killprocess 56818 00:04:05.080 19:59:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 56818 ']' 00:04:05.080 19:59:37 rpc -- common/autotest_common.sh@958 -- # kill -0 56818 00:04:05.080 19:59:37 rpc -- common/autotest_common.sh@959 -- # uname 00:04:05.080 19:59:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.080 19:59:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56818 00:04:05.339 19:59:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.339 19:59:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.339 
19:59:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56818' 00:04:05.339 killing process with pid 56818 00:04:05.339 19:59:37 rpc -- common/autotest_common.sh@973 -- # kill 56818 00:04:05.339 19:59:37 rpc -- common/autotest_common.sh@978 -- # wait 56818 00:04:07.889 00:04:07.889 real 0m5.243s 00:04:07.889 user 0m5.746s 00:04:07.889 sys 0m0.928s 00:04:07.889 19:59:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.889 19:59:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.889 ************************************ 00:04:07.889 END TEST rpc 00:04:07.889 ************************************ 00:04:07.889 19:59:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:07.889 19:59:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.889 19:59:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.889 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:07.889 ************************************ 00:04:07.889 START TEST skip_rpc 00:04:07.889 ************************************ 00:04:07.889 19:59:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:07.889 * Looking for test storage... 
00:04:07.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.890 19:59:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.890 --rc genhtml_branch_coverage=1 00:04:07.890 --rc genhtml_function_coverage=1 00:04:07.890 --rc genhtml_legend=1 00:04:07.890 --rc geninfo_all_blocks=1 00:04:07.890 --rc geninfo_unexecuted_blocks=1 00:04:07.890 00:04:07.890 ' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.890 --rc genhtml_branch_coverage=1 00:04:07.890 --rc genhtml_function_coverage=1 00:04:07.890 --rc genhtml_legend=1 00:04:07.890 --rc geninfo_all_blocks=1 00:04:07.890 --rc geninfo_unexecuted_blocks=1 00:04:07.890 00:04:07.890 ' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.890 --rc genhtml_branch_coverage=1 00:04:07.890 --rc genhtml_function_coverage=1 00:04:07.890 --rc genhtml_legend=1 00:04:07.890 --rc geninfo_all_blocks=1 00:04:07.890 --rc geninfo_unexecuted_blocks=1 00:04:07.890 00:04:07.890 ' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.890 --rc genhtml_branch_coverage=1 00:04:07.890 --rc genhtml_function_coverage=1 00:04:07.890 --rc genhtml_legend=1 00:04:07.890 --rc geninfo_all_blocks=1 00:04:07.890 --rc geninfo_unexecuted_blocks=1 00:04:07.890 00:04:07.890 ' 00:04:07.890 19:59:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:07.890 19:59:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:07.890 19:59:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.890 19:59:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 ************************************ 00:04:07.890 START TEST skip_rpc 00:04:07.890 ************************************ 00:04:07.890 19:59:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:07.890 19:59:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57047 00:04:07.890 19:59:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:07.890 19:59:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.890 19:59:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:07.890 [2024-12-08 19:59:39.854444] Starting SPDK v25.01-pre 
git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:07.890 [2024-12-08 19:59:39.854570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57047 ] 00:04:08.149 [2024-12-08 19:59:40.026299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.408 [2024-12-08 19:59:40.135195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57047 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57047 ']' 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57047 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57047 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.686 killing process with pid 57047 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57047' 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57047 00:04:13.686 19:59:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57047 00:04:15.588 00:04:15.588 real 0m7.462s 00:04:15.588 user 0m7.016s 00:04:15.588 sys 0m0.363s 00:04:15.588 19:59:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.588 19:59:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.588 ************************************ 00:04:15.588 END TEST skip_rpc 00:04:15.588 ************************************ 00:04:15.588 19:59:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.588 19:59:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.588 19:59:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.588 19:59:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.588 
************************************ 00:04:15.588 START TEST skip_rpc_with_json 00:04:15.589 ************************************ 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57151 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57151 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57151 ']' 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.589 19:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.589 [2024-12-08 19:59:47.387413] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:15.589 [2024-12-08 19:59:47.387599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57151 ] 00:04:15.848 [2024-12-08 19:59:47.573126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.848 [2024-12-08 19:59:47.685664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.784 [2024-12-08 19:59:48.557769] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.784 request: 00:04:16.784 { 00:04:16.784 "trtype": "tcp", 00:04:16.784 "method": "nvmf_get_transports", 00:04:16.784 "req_id": 1 00:04:16.784 } 00:04:16.784 Got JSON-RPC error response 00:04:16.784 response: 00:04:16.784 { 00:04:16.784 "code": -19, 00:04:16.784 "message": "No such device" 00:04:16.784 } 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.784 [2024-12-08 19:59:48.569882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.784 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.784 { 00:04:16.784 "subsystems": [ 00:04:16.784 { 00:04:16.784 "subsystem": "fsdev", 00:04:16.784 "config": [ 00:04:16.784 { 00:04:16.784 "method": "fsdev_set_opts", 00:04:16.784 "params": { 00:04:16.784 "fsdev_io_pool_size": 65535, 00:04:16.784 "fsdev_io_cache_size": 256 00:04:16.784 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "keyring", 00:04:16.784 "config": [] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "iobuf", 00:04:16.784 "config": [ 00:04:16.784 { 00:04:16.784 "method": "iobuf_set_options", 00:04:16.784 "params": { 00:04:16.784 "small_pool_count": 8192, 00:04:16.784 "large_pool_count": 1024, 00:04:16.784 "small_bufsize": 8192, 00:04:16.784 "large_bufsize": 135168, 00:04:16.784 "enable_numa": false 00:04:16.784 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "sock", 00:04:16.784 "config": [ 00:04:16.784 { 00:04:16.784 "method": "sock_set_default_impl", 00:04:16.784 "params": { 00:04:16.784 "impl_name": "posix" 00:04:16.784 } 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "method": "sock_impl_set_options", 00:04:16.784 "params": { 00:04:16.784 "impl_name": "ssl", 00:04:16.784 "recv_buf_size": 4096, 00:04:16.784 "send_buf_size": 4096, 00:04:16.784 "enable_recv_pipe": true, 00:04:16.784 "enable_quickack": false, 00:04:16.784 
"enable_placement_id": 0, 00:04:16.784 "enable_zerocopy_send_server": true, 00:04:16.784 "enable_zerocopy_send_client": false, 00:04:16.784 "zerocopy_threshold": 0, 00:04:16.784 "tls_version": 0, 00:04:16.784 "enable_ktls": false 00:04:16.784 } 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "method": "sock_impl_set_options", 00:04:16.784 "params": { 00:04:16.784 "impl_name": "posix", 00:04:16.784 "recv_buf_size": 2097152, 00:04:16.784 "send_buf_size": 2097152, 00:04:16.784 "enable_recv_pipe": true, 00:04:16.784 "enable_quickack": false, 00:04:16.784 "enable_placement_id": 0, 00:04:16.784 "enable_zerocopy_send_server": true, 00:04:16.784 "enable_zerocopy_send_client": false, 00:04:16.784 "zerocopy_threshold": 0, 00:04:16.784 "tls_version": 0, 00:04:16.784 "enable_ktls": false 00:04:16.784 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "vmd", 00:04:16.784 "config": [] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "accel", 00:04:16.784 "config": [ 00:04:16.784 { 00:04:16.784 "method": "accel_set_options", 00:04:16.784 "params": { 00:04:16.784 "small_cache_size": 128, 00:04:16.784 "large_cache_size": 16, 00:04:16.784 "task_count": 2048, 00:04:16.784 "sequence_count": 2048, 00:04:16.784 "buf_count": 2048 00:04:16.784 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "bdev", 00:04:16.784 "config": [ 00:04:16.785 { 00:04:16.785 "method": "bdev_set_options", 00:04:16.785 "params": { 00:04:16.785 "bdev_io_pool_size": 65535, 00:04:16.785 "bdev_io_cache_size": 256, 00:04:16.785 "bdev_auto_examine": true, 00:04:16.785 "iobuf_small_cache_size": 128, 00:04:16.785 "iobuf_large_cache_size": 16 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "bdev_raid_set_options", 00:04:16.785 "params": { 00:04:16.785 "process_window_size_kb": 1024, 00:04:16.785 "process_max_bandwidth_mb_sec": 0 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "bdev_iscsi_set_options", 
00:04:16.785 "params": { 00:04:16.785 "timeout_sec": 30 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "bdev_nvme_set_options", 00:04:16.785 "params": { 00:04:16.785 "action_on_timeout": "none", 00:04:16.785 "timeout_us": 0, 00:04:16.785 "timeout_admin_us": 0, 00:04:16.785 "keep_alive_timeout_ms": 10000, 00:04:16.785 "arbitration_burst": 0, 00:04:16.785 "low_priority_weight": 0, 00:04:16.785 "medium_priority_weight": 0, 00:04:16.785 "high_priority_weight": 0, 00:04:16.785 "nvme_adminq_poll_period_us": 10000, 00:04:16.785 "nvme_ioq_poll_period_us": 0, 00:04:16.785 "io_queue_requests": 0, 00:04:16.785 "delay_cmd_submit": true, 00:04:16.785 "transport_retry_count": 4, 00:04:16.785 "bdev_retry_count": 3, 00:04:16.785 "transport_ack_timeout": 0, 00:04:16.785 "ctrlr_loss_timeout_sec": 0, 00:04:16.785 "reconnect_delay_sec": 0, 00:04:16.785 "fast_io_fail_timeout_sec": 0, 00:04:16.785 "disable_auto_failback": false, 00:04:16.785 "generate_uuids": false, 00:04:16.785 "transport_tos": 0, 00:04:16.785 "nvme_error_stat": false, 00:04:16.785 "rdma_srq_size": 0, 00:04:16.785 "io_path_stat": false, 00:04:16.785 "allow_accel_sequence": false, 00:04:16.785 "rdma_max_cq_size": 0, 00:04:16.785 "rdma_cm_event_timeout_ms": 0, 00:04:16.785 "dhchap_digests": [ 00:04:16.785 "sha256", 00:04:16.785 "sha384", 00:04:16.785 "sha512" 00:04:16.785 ], 00:04:16.785 "dhchap_dhgroups": [ 00:04:16.785 "null", 00:04:16.785 "ffdhe2048", 00:04:16.785 "ffdhe3072", 00:04:16.785 "ffdhe4096", 00:04:16.785 "ffdhe6144", 00:04:16.785 "ffdhe8192" 00:04:16.785 ] 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "bdev_nvme_set_hotplug", 00:04:16.785 "params": { 00:04:16.785 "period_us": 100000, 00:04:16.785 "enable": false 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "bdev_wait_for_examine" 00:04:16.785 } 00:04:16.785 ] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "scsi", 00:04:16.785 "config": null 00:04:16.785 }, 00:04:16.785 { 
00:04:16.785 "subsystem": "scheduler", 00:04:16.785 "config": [ 00:04:16.785 { 00:04:16.785 "method": "framework_set_scheduler", 00:04:16.785 "params": { 00:04:16.785 "name": "static" 00:04:16.785 } 00:04:16.785 } 00:04:16.785 ] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "vhost_scsi", 00:04:16.785 "config": [] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "vhost_blk", 00:04:16.785 "config": [] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "ublk", 00:04:16.785 "config": [] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "nbd", 00:04:16.785 "config": [] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "nvmf", 00:04:16.785 "config": [ 00:04:16.785 { 00:04:16.785 "method": "nvmf_set_config", 00:04:16.785 "params": { 00:04:16.785 "discovery_filter": "match_any", 00:04:16.785 "admin_cmd_passthru": { 00:04:16.785 "identify_ctrlr": false 00:04:16.785 }, 00:04:16.785 "dhchap_digests": [ 00:04:16.785 "sha256", 00:04:16.785 "sha384", 00:04:16.785 "sha512" 00:04:16.785 ], 00:04:16.785 "dhchap_dhgroups": [ 00:04:16.785 "null", 00:04:16.785 "ffdhe2048", 00:04:16.785 "ffdhe3072", 00:04:16.785 "ffdhe4096", 00:04:16.785 "ffdhe6144", 00:04:16.785 "ffdhe8192" 00:04:16.785 ] 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "nvmf_set_max_subsystems", 00:04:16.785 "params": { 00:04:16.785 "max_subsystems": 1024 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "nvmf_set_crdt", 00:04:16.785 "params": { 00:04:16.785 "crdt1": 0, 00:04:16.785 "crdt2": 0, 00:04:16.785 "crdt3": 0 00:04:16.785 } 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "method": "nvmf_create_transport", 00:04:16.785 "params": { 00:04:16.785 "trtype": "TCP", 00:04:16.785 "max_queue_depth": 128, 00:04:16.785 "max_io_qpairs_per_ctrlr": 127, 00:04:16.785 "in_capsule_data_size": 4096, 00:04:16.785 "max_io_size": 131072, 00:04:16.785 "io_unit_size": 131072, 00:04:16.785 "max_aq_depth": 128, 00:04:16.785 "num_shared_buffers": 511, 
00:04:16.785 "buf_cache_size": 4294967295, 00:04:16.785 "dif_insert_or_strip": false, 00:04:16.785 "zcopy": false, 00:04:16.785 "c2h_success": true, 00:04:16.785 "sock_priority": 0, 00:04:16.785 "abort_timeout_sec": 1, 00:04:16.785 "ack_timeout": 0, 00:04:16.785 "data_wr_pool_size": 0 00:04:16.785 } 00:04:16.785 } 00:04:16.785 ] 00:04:16.785 }, 00:04:16.785 { 00:04:16.785 "subsystem": "iscsi", 00:04:16.785 "config": [ 00:04:16.785 { 00:04:16.785 "method": "iscsi_set_options", 00:04:16.785 "params": { 00:04:16.785 "node_base": "iqn.2016-06.io.spdk", 00:04:16.785 "max_sessions": 128, 00:04:16.785 "max_connections_per_session": 2, 00:04:16.785 "max_queue_depth": 64, 00:04:16.785 "default_time2wait": 2, 00:04:16.785 "default_time2retain": 20, 00:04:16.785 "first_burst_length": 8192, 00:04:16.785 "immediate_data": true, 00:04:16.785 "allow_duplicated_isid": false, 00:04:16.785 "error_recovery_level": 0, 00:04:16.785 "nop_timeout": 60, 00:04:16.785 "nop_in_interval": 30, 00:04:16.785 "disable_chap": false, 00:04:16.785 "require_chap": false, 00:04:16.785 "mutual_chap": false, 00:04:16.785 "chap_group": 0, 00:04:16.785 "max_large_datain_per_connection": 64, 00:04:16.785 "max_r2t_per_connection": 4, 00:04:16.785 "pdu_pool_size": 36864, 00:04:16.785 "immediate_data_pool_size": 16384, 00:04:16.785 "data_out_pool_size": 2048 00:04:16.785 } 00:04:16.785 } 00:04:16.785 ] 00:04:16.785 } 00:04:16.785 ] 00:04:16.785 } 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57151 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57151 ']' 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57151 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.785 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57151 00:04:17.044 killing process with pid 57151 00:04:17.044 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.044 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.045 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57151' 00:04:17.045 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57151 00:04:17.045 19:59:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57151 00:04:19.584 19:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57207 00:04:19.584 19:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.584 19:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57207 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57207 ']' 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57207 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57207 00:04:24.888 killing process with pid 57207 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57207' 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57207 00:04:24.888 19:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57207 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.794 00:04:26.794 real 0m11.225s 00:04:26.794 user 0m10.691s 00:04:26.794 sys 0m0.837s 00:04:26.794 ************************************ 00:04:26.794 END TEST skip_rpc_with_json 00:04:26.794 ************************************ 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.794 19:59:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.794 19:59:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.794 19:59:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.794 19:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.794 ************************************ 00:04:26.794 START TEST skip_rpc_with_delay 00:04:26.794 ************************************ 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:26.794 
19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.794 [2024-12-08 19:59:58.676587] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.794 ************************************ 00:04:26.794 END TEST skip_rpc_with_delay 00:04:26.794 ************************************ 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.794 00:04:26.794 real 0m0.169s 00:04:26.794 user 0m0.098s 00:04:26.794 sys 0m0.069s 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.794 19:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:27.054 19:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:27.054 19:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:27.054 19:59:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:27.054 19:59:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.054 19:59:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.054 19:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.054 ************************************ 00:04:27.054 START TEST exit_on_failed_rpc_init 00:04:27.054 ************************************ 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57335 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57335 00:04:27.054 19:59:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57335 ']' 00:04:27.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.054 19:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.054 [2024-12-08 19:59:58.906418] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:27.054 [2024-12-08 19:59:58.906549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:04:27.313 [2024-12-08 19:59:59.079975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.313 [2024-12-08 19:59:59.193035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.250 19:59:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.250 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.250 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.250 19:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.250 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.250 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:28.250 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.250 [2024-12-08 20:00:00.106714] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:28.250 [2024-12-08 20:00:00.106919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57353 ] 00:04:28.509 [2024-12-08 20:00:00.278518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.509 [2024-12-08 20:00:00.390026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.509 [2024-12-08 20:00:00.390116] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:28.509 [2024-12-08 20:00:00.390129] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.509 [2024-12-08 20:00:00.390140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57335 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57335 ']' 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57335 00:04:28.769 20:00:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57335 00:04:28.769 killing process with pid 57335 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57335' 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57335 00:04:28.769 20:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57335 00:04:31.307 00:04:31.307 real 0m4.198s 00:04:31.307 user 0m4.489s 00:04:31.307 sys 0m0.572s 00:04:31.307 20:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.307 ************************************ 00:04:31.307 END TEST exit_on_failed_rpc_init 00:04:31.307 ************************************ 00:04:31.307 20:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.307 20:00:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.307 ************************************ 00:04:31.307 END TEST skip_rpc 00:04:31.307 ************************************ 00:04:31.307 00:04:31.307 real 0m23.530s 00:04:31.307 user 0m22.501s 00:04:31.307 sys 0m2.129s 00:04:31.307 20:00:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.307 20:00:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.307 20:00:03 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:31.307 20:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.307 20:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.307 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:31.307 ************************************ 00:04:31.307 START TEST rpc_client 00:04:31.307 ************************************ 00:04:31.307 20:00:03 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:31.307 * Looking for test storage... 00:04:31.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:31.307 20:00:03 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.307 20:00:03 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.307 20:00:03 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.566 20:00:03 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:31.566 20:00:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.567 20:00:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.567 20:00:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.567 20:00:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.567 --rc genhtml_branch_coverage=1 00:04:31.567 --rc genhtml_function_coverage=1 00:04:31.567 --rc genhtml_legend=1 00:04:31.567 --rc geninfo_all_blocks=1 00:04:31.567 --rc geninfo_unexecuted_blocks=1 00:04:31.567 00:04:31.567 ' 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.567 --rc genhtml_branch_coverage=1 00:04:31.567 --rc genhtml_function_coverage=1 00:04:31.567 --rc 
genhtml_legend=1 00:04:31.567 --rc geninfo_all_blocks=1 00:04:31.567 --rc geninfo_unexecuted_blocks=1 00:04:31.567 00:04:31.567 ' 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.567 --rc genhtml_branch_coverage=1 00:04:31.567 --rc genhtml_function_coverage=1 00:04:31.567 --rc genhtml_legend=1 00:04:31.567 --rc geninfo_all_blocks=1 00:04:31.567 --rc geninfo_unexecuted_blocks=1 00:04:31.567 00:04:31.567 ' 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.567 --rc genhtml_branch_coverage=1 00:04:31.567 --rc genhtml_function_coverage=1 00:04:31.567 --rc genhtml_legend=1 00:04:31.567 --rc geninfo_all_blocks=1 00:04:31.567 --rc geninfo_unexecuted_blocks=1 00:04:31.567 00:04:31.567 ' 00:04:31.567 20:00:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:31.567 OK 00:04:31.567 20:00:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:31.567 00:04:31.567 real 0m0.294s 00:04:31.567 user 0m0.152s 00:04:31.567 sys 0m0.157s 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.567 20:00:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:31.567 ************************************ 00:04:31.567 END TEST rpc_client 00:04:31.567 ************************************ 00:04:31.567 20:00:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:31.567 20:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.567 20:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.567 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:31.567 ************************************ 00:04:31.567 START TEST json_config 
00:04:31.567 ************************************ 00:04:31.567 20:00:03 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:31.827 20:00:03 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.827 20:00:03 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.827 20:00:03 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.828 20:00:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.828 20:00:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.828 20:00:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.828 20:00:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.828 20:00:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.828 20:00:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:31.828 20:00:03 json_config -- scripts/common.sh@345 -- # : 1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.828 20:00:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.828 20:00:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@353 -- # local d=1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.828 20:00:03 json_config -- scripts/common.sh@355 -- # echo 1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.828 20:00:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@353 -- # local d=2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.828 20:00:03 json_config -- scripts/common.sh@355 -- # echo 2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.828 20:00:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.828 20:00:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.828 20:00:03 json_config -- scripts/common.sh@368 -- # return 0 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 20:00:03 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:31.828 20:00:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.828 20:00:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.828 20:00:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.828 20:00:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.828 20:00:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.828 20:00:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.828 20:00:03 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.828 20:00:03 json_config -- paths/export.sh@5 -- # export PATH 00:04:31.828 20:00:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@51 -- # : 0 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.828 20:00:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
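The trace above ends with bash complaining `[: : integer expression expected` at nvmf/common.sh line 33: the test `[ '' -eq 1 ]` fails because `-eq` requires integer operands on both sides, and the variable expanded to an empty string. A minimal reproduction, with a common guard (a sketch of the general fix, not SPDK's actual code), looks like this:

```shell
# Reproduces the "[: : integer expression expected" failure mode seen in
# the trace, then guards it: ${FLAG:-0} substitutes 0 when FLAG is unset
# or empty, so -eq always sees an integer.
FLAG=""
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

With `FLAG=""` this prints `flag unset` instead of erroring, because the `:-` expansion supplies the default before `-eq` runs.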
00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:31.828 WARNING: No tests are enabled so not running JSON configuration tests 00:04:31.828 20:00:03 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:31.828 00:04:31.828 real 0m0.229s 00:04:31.828 user 0m0.139s 00:04:31.828 sys 0m0.093s 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.828 20:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.828 ************************************ 00:04:31.828 END TEST json_config 00:04:31.828 ************************************ 00:04:31.828 20:00:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:31.828 20:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.828 20:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.828 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:31.828 ************************************ 00:04:31.828 START TEST json_config_extra_key 00:04:31.828 ************************************ 00:04:31.828 20:00:03 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.089 20:00:03 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.089 --rc genhtml_branch_coverage=1 00:04:32.089 --rc genhtml_function_coverage=1 00:04:32.089 --rc genhtml_legend=1 00:04:32.089 --rc geninfo_all_blocks=1 00:04:32.089 --rc geninfo_unexecuted_blocks=1 00:04:32.089 00:04:32.089 ' 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.089 --rc genhtml_branch_coverage=1 00:04:32.089 --rc genhtml_function_coverage=1 00:04:32.089 --rc 
genhtml_legend=1 00:04:32.089 --rc geninfo_all_blocks=1 00:04:32.089 --rc geninfo_unexecuted_blocks=1 00:04:32.089 00:04:32.089 ' 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.089 --rc genhtml_branch_coverage=1 00:04:32.089 --rc genhtml_function_coverage=1 00:04:32.089 --rc genhtml_legend=1 00:04:32.089 --rc geninfo_all_blocks=1 00:04:32.089 --rc geninfo_unexecuted_blocks=1 00:04:32.089 00:04:32.089 ' 00:04:32.089 20:00:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.089 --rc genhtml_branch_coverage=1 00:04:32.089 --rc genhtml_function_coverage=1 00:04:32.089 --rc genhtml_legend=1 00:04:32.089 --rc geninfo_all_blocks=1 00:04:32.089 --rc geninfo_unexecuted_blocks=1 00:04:32.089 00:04:32.089 ' 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ef0d36f9-96c7-4fe2-b5c9-cb1956b56ec5 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.089 20:00:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.089 20:00:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.089 20:00:03 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.089 20:00:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.089 20:00:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.089 20:00:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.089 20:00:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.089 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.090 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.090 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.090 INFO: launching applications... 
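The `waitforlisten 57563 /var/tmp/spdk_tgt.sock` call traced below blocks until the freshly launched target creates its RPC socket. A simplified sketch of that wait loop follows; `wait_for_socket` is an illustrative name, and the real helper in autotest_common.sh additionally verifies the PID is still alive and uses its own retry count:

```shell
# Hedged sketch of the socket-wait pattern behind waitforlisten:
# poll for the UNIX domain socket until it appears or retries run out.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                        # socket never appeared
}
```

Usage mirrors the test flow above: `wait_for_socket /var/tmp/spdk_tgt.sock 50 || echo 'target never listened'`.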
00:04:32.090 20:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57563 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.090 Waiting for target to run... 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57563 /var/tmp/spdk_tgt.sock 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57563 ']' 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.090 20:00:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:32.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.090 20:00:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.350 [2024-12-08 20:00:04.082898] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:32.350 [2024-12-08 20:00:04.083049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57563 ] 00:04:32.609 [2024-12-08 20:00:04.471069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.609 [2024-12-08 20:00:04.573265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.549 00:04:33.549 INFO: shutting down applications... 00:04:33.549 20:00:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.549 20:00:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.549 20:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:33.549 20:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57563 ]] 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57563 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:33.549 20:00:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.808 20:00:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.808 20:00:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.808 20:00:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:33.808 20:00:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.387 20:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.387 20:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.387 20:00:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:34.387 20:00:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.967 20:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.967 20:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.967 20:00:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:34.967 20:00:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.536 20:00:07 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:35.536 20:00:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.536 20:00:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:35.536 20:00:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.104 20:00:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.104 20:00:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.104 20:00:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:36.104 20:00:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57563 00:04:36.363 SPDK target shutdown done 00:04:36.363 Success 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.363 20:00:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.363 20:00:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:36.363 ************************************ 00:04:36.363 END TEST json_config_extra_key 00:04:36.363 ************************************ 00:04:36.363 00:04:36.363 real 0m4.511s 00:04:36.363 user 0m3.923s 00:04:36.363 sys 0m0.536s 00:04:36.363 20:00:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.363 20:00:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.623 20:00:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.623 20:00:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.623 20:00:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.623 20:00:08 -- common/autotest_common.sh@10 -- # set +x 00:04:36.623 ************************************ 00:04:36.623 START TEST alias_rpc 00:04:36.623 ************************************ 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.623 * Looking for test storage... 00:04:36.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.623 20:00:08 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.623 20:00:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.623 --rc genhtml_branch_coverage=1 00:04:36.623 --rc genhtml_function_coverage=1 00:04:36.623 --rc genhtml_legend=1 00:04:36.623 --rc geninfo_all_blocks=1 00:04:36.623 --rc geninfo_unexecuted_blocks=1 00:04:36.623 00:04:36.623 ' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.623 --rc genhtml_branch_coverage=1 00:04:36.623 --rc genhtml_function_coverage=1 00:04:36.623 --rc 
genhtml_legend=1 00:04:36.623 --rc geninfo_all_blocks=1 00:04:36.623 --rc geninfo_unexecuted_blocks=1 00:04:36.623 00:04:36.623 ' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.623 --rc genhtml_branch_coverage=1 00:04:36.623 --rc genhtml_function_coverage=1 00:04:36.623 --rc genhtml_legend=1 00:04:36.623 --rc geninfo_all_blocks=1 00:04:36.623 --rc geninfo_unexecuted_blocks=1 00:04:36.623 00:04:36.623 ' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.623 --rc genhtml_branch_coverage=1 00:04:36.623 --rc genhtml_function_coverage=1 00:04:36.623 --rc genhtml_legend=1 00:04:36.623 --rc geninfo_all_blocks=1 00:04:36.623 --rc geninfo_unexecuted_blocks=1 00:04:36.623 00:04:36.623 ' 00:04:36.623 20:00:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.623 20:00:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.623 20:00:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57676 00:04:36.623 20:00:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57676 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57676 ']' 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
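The `kill -0 57563` / `sleep 0.5` loops traced during the json_config_extra_key shutdown above, and the `kill -0 57676` probe in the killprocess call below, both use signal 0 as a pure existence check. A simplified sketch of that poll-until-dead pattern (illustrative only; the real loop lives in json_config/common.sh and caps at 30 iterations):

```shell
# Send SIGINT, then probe with `kill -0` (delivers no signal, only
# checks the PID exists) every 0.5 s until the process exits.
shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
    sleep 0.5
  done
  return 1                                    # still alive after ~15 s
}
```

This is why the trace shows repeated `kill -0 57563` lines half a second apart before "SPDK target shutdown done".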
00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.623 20:00:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 [2024-12-08 20:00:08.649634] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:36.883 [2024-12-08 20:00:08.649843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 00:04:36.883 [2024-12-08 20:00:08.821781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.142 [2024-12-08 20:00:08.931394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.078 20:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:38.078 20:00:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57676 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57676 ']' 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57676 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.078 20:00:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57676 00:04:38.078 20:00:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.078 20:00:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.078 20:00:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57676' 00:04:38.078 killing process with pid 57676 00:04:38.078 20:00:10 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57676 00:04:38.078 20:00:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 57676 00:04:40.620 00:04:40.620 real 0m3.966s 00:04:40.620 user 0m4.007s 00:04:40.620 sys 0m0.543s 00:04:40.620 20:00:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.620 20:00:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 ************************************ 00:04:40.620 END TEST alias_rpc 00:04:40.620 ************************************ 00:04:40.620 20:00:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:40.620 20:00:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.620 20:00:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.620 20:00:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.620 20:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 ************************************ 00:04:40.620 START TEST spdkcli_tcp 00:04:40.620 ************************************ 00:04:40.620 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.620 * Looking for test storage... 
00:04:40.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:40.620 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.620 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.620 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.620 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.620 20:00:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.880 20:00:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.880 --rc genhtml_branch_coverage=1 00:04:40.880 --rc genhtml_function_coverage=1 00:04:40.880 --rc genhtml_legend=1 00:04:40.880 --rc geninfo_all_blocks=1 00:04:40.880 --rc geninfo_unexecuted_blocks=1 00:04:40.880 00:04:40.880 ' 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.880 --rc genhtml_branch_coverage=1 00:04:40.880 --rc genhtml_function_coverage=1 00:04:40.880 --rc genhtml_legend=1 00:04:40.880 --rc geninfo_all_blocks=1 00:04:40.880 --rc geninfo_unexecuted_blocks=1 00:04:40.880 00:04:40.880 ' 00:04:40.880 20:00:12 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.880 --rc genhtml_branch_coverage=1 00:04:40.880 --rc genhtml_function_coverage=1 00:04:40.880 --rc genhtml_legend=1 00:04:40.880 --rc geninfo_all_blocks=1 00:04:40.880 --rc geninfo_unexecuted_blocks=1 00:04:40.880 00:04:40.880 ' 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.880 --rc genhtml_branch_coverage=1 00:04:40.880 --rc genhtml_function_coverage=1 00:04:40.880 --rc genhtml_legend=1 00:04:40.880 --rc geninfo_all_blocks=1 00:04:40.880 --rc geninfo_unexecuted_blocks=1 00:04:40.880 00:04:40.880 ' 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57783 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.880 20:00:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57783 00:04:40.880 20:00:12 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57783 ']' 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.880 20:00:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 [2024-12-08 20:00:12.712432] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:40.880 [2024-12-08 20:00:12.712627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:04:41.141 [2024-12-08 20:00:12.886008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.141 [2024-12-08 20:00:12.997652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.141 [2024-12-08 20:00:12.997659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.080 20:00:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.081 20:00:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:42.081 20:00:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57800 00:04:42.081 20:00:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.081 20:00:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:42.081 [ 00:04:42.081 "bdev_malloc_delete", 
00:04:42.081 "bdev_malloc_create", 00:04:42.081 "bdev_null_resize", 00:04:42.081 "bdev_null_delete", 00:04:42.081 "bdev_null_create", 00:04:42.081 "bdev_nvme_cuse_unregister", 00:04:42.081 "bdev_nvme_cuse_register", 00:04:42.081 "bdev_opal_new_user", 00:04:42.081 "bdev_opal_set_lock_state", 00:04:42.081 "bdev_opal_delete", 00:04:42.081 "bdev_opal_get_info", 00:04:42.081 "bdev_opal_create", 00:04:42.081 "bdev_nvme_opal_revert", 00:04:42.081 "bdev_nvme_opal_init", 00:04:42.081 "bdev_nvme_send_cmd", 00:04:42.081 "bdev_nvme_set_keys", 00:04:42.081 "bdev_nvme_get_path_iostat", 00:04:42.081 "bdev_nvme_get_mdns_discovery_info", 00:04:42.081 "bdev_nvme_stop_mdns_discovery", 00:04:42.081 "bdev_nvme_start_mdns_discovery", 00:04:42.081 "bdev_nvme_set_multipath_policy", 00:04:42.081 "bdev_nvme_set_preferred_path", 00:04:42.081 "bdev_nvme_get_io_paths", 00:04:42.081 "bdev_nvme_remove_error_injection", 00:04:42.081 "bdev_nvme_add_error_injection", 00:04:42.081 "bdev_nvme_get_discovery_info", 00:04:42.081 "bdev_nvme_stop_discovery", 00:04:42.081 "bdev_nvme_start_discovery", 00:04:42.081 "bdev_nvme_get_controller_health_info", 00:04:42.081 "bdev_nvme_disable_controller", 00:04:42.081 "bdev_nvme_enable_controller", 00:04:42.081 "bdev_nvme_reset_controller", 00:04:42.081 "bdev_nvme_get_transport_statistics", 00:04:42.081 "bdev_nvme_apply_firmware", 00:04:42.081 "bdev_nvme_detach_controller", 00:04:42.081 "bdev_nvme_get_controllers", 00:04:42.081 "bdev_nvme_attach_controller", 00:04:42.081 "bdev_nvme_set_hotplug", 00:04:42.081 "bdev_nvme_set_options", 00:04:42.081 "bdev_passthru_delete", 00:04:42.081 "bdev_passthru_create", 00:04:42.081 "bdev_lvol_set_parent_bdev", 00:04:42.081 "bdev_lvol_set_parent", 00:04:42.081 "bdev_lvol_check_shallow_copy", 00:04:42.081 "bdev_lvol_start_shallow_copy", 00:04:42.081 "bdev_lvol_grow_lvstore", 00:04:42.081 "bdev_lvol_get_lvols", 00:04:42.081 "bdev_lvol_get_lvstores", 00:04:42.081 "bdev_lvol_delete", 00:04:42.081 "bdev_lvol_set_read_only", 
00:04:42.081 "bdev_lvol_resize", 00:04:42.081 "bdev_lvol_decouple_parent", 00:04:42.081 "bdev_lvol_inflate", 00:04:42.081 "bdev_lvol_rename", 00:04:42.081 "bdev_lvol_clone_bdev", 00:04:42.081 "bdev_lvol_clone", 00:04:42.081 "bdev_lvol_snapshot", 00:04:42.081 "bdev_lvol_create", 00:04:42.081 "bdev_lvol_delete_lvstore", 00:04:42.081 "bdev_lvol_rename_lvstore", 00:04:42.081 "bdev_lvol_create_lvstore", 00:04:42.081 "bdev_raid_set_options", 00:04:42.081 "bdev_raid_remove_base_bdev", 00:04:42.081 "bdev_raid_add_base_bdev", 00:04:42.081 "bdev_raid_delete", 00:04:42.081 "bdev_raid_create", 00:04:42.081 "bdev_raid_get_bdevs", 00:04:42.081 "bdev_error_inject_error", 00:04:42.081 "bdev_error_delete", 00:04:42.081 "bdev_error_create", 00:04:42.081 "bdev_split_delete", 00:04:42.081 "bdev_split_create", 00:04:42.081 "bdev_delay_delete", 00:04:42.081 "bdev_delay_create", 00:04:42.081 "bdev_delay_update_latency", 00:04:42.081 "bdev_zone_block_delete", 00:04:42.081 "bdev_zone_block_create", 00:04:42.081 "blobfs_create", 00:04:42.081 "blobfs_detect", 00:04:42.081 "blobfs_set_cache_size", 00:04:42.081 "bdev_aio_delete", 00:04:42.081 "bdev_aio_rescan", 00:04:42.081 "bdev_aio_create", 00:04:42.081 "bdev_ftl_set_property", 00:04:42.081 "bdev_ftl_get_properties", 00:04:42.081 "bdev_ftl_get_stats", 00:04:42.081 "bdev_ftl_unmap", 00:04:42.081 "bdev_ftl_unload", 00:04:42.081 "bdev_ftl_delete", 00:04:42.081 "bdev_ftl_load", 00:04:42.081 "bdev_ftl_create", 00:04:42.081 "bdev_virtio_attach_controller", 00:04:42.081 "bdev_virtio_scsi_get_devices", 00:04:42.081 "bdev_virtio_detach_controller", 00:04:42.081 "bdev_virtio_blk_set_hotplug", 00:04:42.081 "bdev_iscsi_delete", 00:04:42.081 "bdev_iscsi_create", 00:04:42.081 "bdev_iscsi_set_options", 00:04:42.081 "accel_error_inject_error", 00:04:42.081 "ioat_scan_accel_module", 00:04:42.081 "dsa_scan_accel_module", 00:04:42.081 "iaa_scan_accel_module", 00:04:42.081 "keyring_file_remove_key", 00:04:42.081 "keyring_file_add_key", 00:04:42.081 
"keyring_linux_set_options", 00:04:42.081 "fsdev_aio_delete", 00:04:42.081 "fsdev_aio_create", 00:04:42.081 "iscsi_get_histogram", 00:04:42.081 "iscsi_enable_histogram", 00:04:42.081 "iscsi_set_options", 00:04:42.081 "iscsi_get_auth_groups", 00:04:42.081 "iscsi_auth_group_remove_secret", 00:04:42.081 "iscsi_auth_group_add_secret", 00:04:42.081 "iscsi_delete_auth_group", 00:04:42.081 "iscsi_create_auth_group", 00:04:42.081 "iscsi_set_discovery_auth", 00:04:42.081 "iscsi_get_options", 00:04:42.081 "iscsi_target_node_request_logout", 00:04:42.081 "iscsi_target_node_set_redirect", 00:04:42.081 "iscsi_target_node_set_auth", 00:04:42.081 "iscsi_target_node_add_lun", 00:04:42.081 "iscsi_get_stats", 00:04:42.081 "iscsi_get_connections", 00:04:42.081 "iscsi_portal_group_set_auth", 00:04:42.081 "iscsi_start_portal_group", 00:04:42.081 "iscsi_delete_portal_group", 00:04:42.081 "iscsi_create_portal_group", 00:04:42.081 "iscsi_get_portal_groups", 00:04:42.081 "iscsi_delete_target_node", 00:04:42.081 "iscsi_target_node_remove_pg_ig_maps", 00:04:42.081 "iscsi_target_node_add_pg_ig_maps", 00:04:42.081 "iscsi_create_target_node", 00:04:42.081 "iscsi_get_target_nodes", 00:04:42.081 "iscsi_delete_initiator_group", 00:04:42.081 "iscsi_initiator_group_remove_initiators", 00:04:42.081 "iscsi_initiator_group_add_initiators", 00:04:42.081 "iscsi_create_initiator_group", 00:04:42.081 "iscsi_get_initiator_groups", 00:04:42.081 "nvmf_set_crdt", 00:04:42.081 "nvmf_set_config", 00:04:42.081 "nvmf_set_max_subsystems", 00:04:42.081 "nvmf_stop_mdns_prr", 00:04:42.081 "nvmf_publish_mdns_prr", 00:04:42.081 "nvmf_subsystem_get_listeners", 00:04:42.081 "nvmf_subsystem_get_qpairs", 00:04:42.081 "nvmf_subsystem_get_controllers", 00:04:42.081 "nvmf_get_stats", 00:04:42.081 "nvmf_get_transports", 00:04:42.081 "nvmf_create_transport", 00:04:42.081 "nvmf_get_targets", 00:04:42.081 "nvmf_delete_target", 00:04:42.081 "nvmf_create_target", 00:04:42.081 "nvmf_subsystem_allow_any_host", 00:04:42.081 
"nvmf_subsystem_set_keys", 00:04:42.081 "nvmf_subsystem_remove_host", 00:04:42.081 "nvmf_subsystem_add_host", 00:04:42.081 "nvmf_ns_remove_host", 00:04:42.081 "nvmf_ns_add_host", 00:04:42.081 "nvmf_subsystem_remove_ns", 00:04:42.081 "nvmf_subsystem_set_ns_ana_group", 00:04:42.081 "nvmf_subsystem_add_ns", 00:04:42.081 "nvmf_subsystem_listener_set_ana_state", 00:04:42.081 "nvmf_discovery_get_referrals", 00:04:42.081 "nvmf_discovery_remove_referral", 00:04:42.081 "nvmf_discovery_add_referral", 00:04:42.081 "nvmf_subsystem_remove_listener", 00:04:42.081 "nvmf_subsystem_add_listener", 00:04:42.081 "nvmf_delete_subsystem", 00:04:42.081 "nvmf_create_subsystem", 00:04:42.081 "nvmf_get_subsystems", 00:04:42.081 "env_dpdk_get_mem_stats", 00:04:42.081 "nbd_get_disks", 00:04:42.081 "nbd_stop_disk", 00:04:42.081 "nbd_start_disk", 00:04:42.081 "ublk_recover_disk", 00:04:42.081 "ublk_get_disks", 00:04:42.081 "ublk_stop_disk", 00:04:42.081 "ublk_start_disk", 00:04:42.081 "ublk_destroy_target", 00:04:42.081 "ublk_create_target", 00:04:42.081 "virtio_blk_create_transport", 00:04:42.081 "virtio_blk_get_transports", 00:04:42.081 "vhost_controller_set_coalescing", 00:04:42.081 "vhost_get_controllers", 00:04:42.081 "vhost_delete_controller", 00:04:42.081 "vhost_create_blk_controller", 00:04:42.081 "vhost_scsi_controller_remove_target", 00:04:42.081 "vhost_scsi_controller_add_target", 00:04:42.081 "vhost_start_scsi_controller", 00:04:42.081 "vhost_create_scsi_controller", 00:04:42.081 "thread_set_cpumask", 00:04:42.081 "scheduler_set_options", 00:04:42.081 "framework_get_governor", 00:04:42.081 "framework_get_scheduler", 00:04:42.081 "framework_set_scheduler", 00:04:42.081 "framework_get_reactors", 00:04:42.081 "thread_get_io_channels", 00:04:42.081 "thread_get_pollers", 00:04:42.081 "thread_get_stats", 00:04:42.081 "framework_monitor_context_switch", 00:04:42.081 "spdk_kill_instance", 00:04:42.081 "log_enable_timestamps", 00:04:42.081 "log_get_flags", 00:04:42.081 "log_clear_flag", 
00:04:42.081 "log_set_flag", 00:04:42.081 "log_get_level", 00:04:42.081 "log_set_level", 00:04:42.081 "log_get_print_level", 00:04:42.081 "log_set_print_level", 00:04:42.081 "framework_enable_cpumask_locks", 00:04:42.081 "framework_disable_cpumask_locks", 00:04:42.081 "framework_wait_init", 00:04:42.081 "framework_start_init", 00:04:42.081 "scsi_get_devices", 00:04:42.081 "bdev_get_histogram", 00:04:42.081 "bdev_enable_histogram", 00:04:42.081 "bdev_set_qos_limit", 00:04:42.081 "bdev_set_qd_sampling_period", 00:04:42.081 "bdev_get_bdevs", 00:04:42.081 "bdev_reset_iostat", 00:04:42.081 "bdev_get_iostat", 00:04:42.082 "bdev_examine", 00:04:42.082 "bdev_wait_for_examine", 00:04:42.082 "bdev_set_options", 00:04:42.082 "accel_get_stats", 00:04:42.082 "accel_set_options", 00:04:42.082 "accel_set_driver", 00:04:42.082 "accel_crypto_key_destroy", 00:04:42.082 "accel_crypto_keys_get", 00:04:42.082 "accel_crypto_key_create", 00:04:42.082 "accel_assign_opc", 00:04:42.082 "accel_get_module_info", 00:04:42.082 "accel_get_opc_assignments", 00:04:42.082 "vmd_rescan", 00:04:42.082 "vmd_remove_device", 00:04:42.082 "vmd_enable", 00:04:42.082 "sock_get_default_impl", 00:04:42.082 "sock_set_default_impl", 00:04:42.082 "sock_impl_set_options", 00:04:42.082 "sock_impl_get_options", 00:04:42.082 "iobuf_get_stats", 00:04:42.082 "iobuf_set_options", 00:04:42.082 "keyring_get_keys", 00:04:42.082 "framework_get_pci_devices", 00:04:42.082 "framework_get_config", 00:04:42.082 "framework_get_subsystems", 00:04:42.082 "fsdev_set_opts", 00:04:42.082 "fsdev_get_opts", 00:04:42.082 "trace_get_info", 00:04:42.082 "trace_get_tpoint_group_mask", 00:04:42.082 "trace_disable_tpoint_group", 00:04:42.082 "trace_enable_tpoint_group", 00:04:42.082 "trace_clear_tpoint_mask", 00:04:42.082 "trace_set_tpoint_mask", 00:04:42.082 "notify_get_notifications", 00:04:42.082 "notify_get_types", 00:04:42.082 "spdk_get_version", 00:04:42.082 "rpc_get_methods" 00:04:42.082 ] 00:04:42.082 20:00:14 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:42.082 20:00:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.082 20:00:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 20:00:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:42.342 20:00:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57783 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57783 ']' 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57783 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57783 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57783' 00:04:42.342 killing process with pid 57783 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57783 00:04:42.342 20:00:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57783 00:04:44.879 00:04:44.879 real 0m4.097s 00:04:44.879 user 0m7.261s 00:04:44.879 sys 0m0.622s 00:04:44.879 20:00:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.879 20:00:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.879 ************************************ 00:04:44.879 END TEST spdkcli_tcp 00:04:44.879 ************************************ 00:04:44.879 20:00:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.879 20:00:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.879 20:00:16 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.879 20:00:16 -- common/autotest_common.sh@10 -- # set +x 00:04:44.879 ************************************ 00:04:44.879 START TEST dpdk_mem_utility 00:04:44.879 ************************************ 00:04:44.879 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.879 * Looking for test storage... 00:04:44.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:44.879 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.879 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.879 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.879 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:44.880 
20:00:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.880 20:00:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.880 --rc genhtml_branch_coverage=1 00:04:44.880 --rc genhtml_function_coverage=1 00:04:44.880 --rc genhtml_legend=1 00:04:44.880 --rc geninfo_all_blocks=1 00:04:44.880 --rc geninfo_unexecuted_blocks=1 00:04:44.880 00:04:44.880 ' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.880 --rc 
genhtml_branch_coverage=1 00:04:44.880 --rc genhtml_function_coverage=1 00:04:44.880 --rc genhtml_legend=1 00:04:44.880 --rc geninfo_all_blocks=1 00:04:44.880 --rc geninfo_unexecuted_blocks=1 00:04:44.880 00:04:44.880 ' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.880 --rc genhtml_branch_coverage=1 00:04:44.880 --rc genhtml_function_coverage=1 00:04:44.880 --rc genhtml_legend=1 00:04:44.880 --rc geninfo_all_blocks=1 00:04:44.880 --rc geninfo_unexecuted_blocks=1 00:04:44.880 00:04:44.880 ' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.880 --rc genhtml_branch_coverage=1 00:04:44.880 --rc genhtml_function_coverage=1 00:04:44.880 --rc genhtml_legend=1 00:04:44.880 --rc geninfo_all_blocks=1 00:04:44.880 --rc geninfo_unexecuted_blocks=1 00:04:44.880 00:04:44.880 ' 00:04:44.880 20:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:44.880 20:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57906 00:04:44.880 20:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.880 20:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57906 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57906 ']' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.880 20:00:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.138 [2024-12-08 20:00:16.863767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:45.138 [2024-12-08 20:00:16.863986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57906 ] 00:04:45.138 [2024-12-08 20:00:17.038144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.395 [2024-12-08 20:00:17.145643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.331 20:00:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.331 20:00:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:46.331 20:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.331 20:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.331 20:00:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.331 20:00:17 dpdk_mem_utility -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.331 { 00:04:46.331 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.331 } 00:04:46.331 20:00:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.331 20:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.331 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:46.331 1 heaps totaling size 824.000000 MiB 00:04:46.331 size: 824.000000 MiB heap id: 0 00:04:46.331 end heaps---------- 00:04:46.331 9 mempools totaling size 603.782043 MiB 00:04:46.331 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.331 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.331 size: 100.555481 MiB name: bdev_io_57906 00:04:46.331 size: 50.003479 MiB name: msgpool_57906 00:04:46.331 size: 36.509338 MiB name: fsdev_io_57906 00:04:46.331 size: 21.763794 MiB name: PDU_Pool 00:04:46.331 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.331 size: 4.133484 MiB name: evtpool_57906 00:04:46.331 size: 0.026123 MiB name: Session_Pool 00:04:46.331 end mempools------- 00:04:46.331 6 memzones totaling size 4.142822 MiB 00:04:46.331 size: 1.000366 MiB name: RG_ring_0_57906 00:04:46.331 size: 1.000366 MiB name: RG_ring_1_57906 00:04:46.331 size: 1.000366 MiB name: RG_ring_4_57906 00:04:46.331 size: 1.000366 MiB name: RG_ring_5_57906 00:04:46.331 size: 0.125366 MiB name: RG_ring_2_57906 00:04:46.331 size: 0.015991 MiB name: RG_ring_3_57906 00:04:46.331 end memzones------- 00:04:46.331 20:00:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.331 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:04:46.331 list of free elements. 
size: 16.781860 MiB 00:04:46.331 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:46.331 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:46.331 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:46.331 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:46.331 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:46.331 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:46.331 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:46.331 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:46.331 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:46.331 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:46.331 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:46.331 element at address: 0x20001b400000 with size: 0.563171 MiB 00:04:46.331 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:46.331 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:46.331 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:46.331 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:46.331 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:46.331 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:46.331 list of standard malloc elements. 
size: 199.287231 MiB
00:04:46.331 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:46.331 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:46.331 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:46.331 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:04:46.331 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:04:46.331 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:46.332 element at address: 0x200019deff40 with size: 0.062683 MiB
00:04:46.332 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:46.332 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:46.332 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:04:46.332 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:46.332 [several hundred pool elements of 0.000244 MiB each, at addresses 0x2000002d7b00 through 0x20002886fe80, elided]
00:04:46.333 list of memzone associated elements. size: 607.930908 MiB
00:04:46.333 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:04:46.333 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:46.333 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:04:46.333 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:46.333 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:04:46.333 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57906_0
00:04:46.333 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:46.333 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57906_0
00:04:46.333 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:46.333 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57906_0
00:04:46.333 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:04:46.333 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:46.333 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:04:46.333 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:46.333 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:46.333 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57906_0
00:04:46.333 element at address: 0x2000009ffdc0
with size: 2.000549 MiB
00:04:46.333 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57906
00:04:46.333 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:46.333 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57906
00:04:46.333 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:04:46.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:46.333 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:04:46.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:46.333 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:04:46.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:46.333 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:04:46.333 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:46.333 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:46.333 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57906
00:04:46.333 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:46.333 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57906
00:04:46.333 element at address: 0x200019affd40 with size: 1.000549 MiB
00:04:46.333 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57906
00:04:46.333 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:04:46.333 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57906
00:04:46.333 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:46.333 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57906
00:04:46.333 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:46.333 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57906
00:04:46.333 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:04:46.333 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:46.333 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:04:46.333 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:46.333 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:04:46.333 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:46.333 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:46.333 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57906
00:04:46.333 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:46.333 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57906
00:04:46.333 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:04:46.333 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:46.333 element at address: 0x200028864140 with size: 0.023804 MiB
00:04:46.333 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:46.333 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:46.333 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57906
00:04:46.333 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:04:46.333 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:46.333 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:46.333 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57906
00:04:46.333 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:46.333 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57906
00:04:46.333 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:46.333 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57906
00:04:46.333 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:04:46.333 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:46.333 20:00:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:46.333 20:00:18 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57906 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57906 ']' 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57906 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57906 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57906' 00:04:46.333 killing process with pid 57906 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57906 00:04:46.333 20:00:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57906 00:04:48.871 00:04:48.871 real 0m3.944s 00:04:48.871 user 0m3.832s 00:04:48.871 sys 0m0.560s 00:04:48.872 20:00:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.872 20:00:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.872 ************************************ 00:04:48.872 END TEST dpdk_mem_utility 00:04:48.872 ************************************ 00:04:48.872 20:00:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.872 20:00:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.872 20:00:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.872 20:00:20 -- common/autotest_common.sh@10 -- # set +x 00:04:48.872 ************************************ 00:04:48.872 START TEST event 00:04:48.872 ************************************ 00:04:48.872 20:00:20 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.872 * Looking for test storage... 00:04:48.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.872 20:00:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.872 20:00:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.872 20:00:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.872 20:00:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.872 20:00:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.872 20:00:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.872 20:00:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.872 20:00:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.872 20:00:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.872 20:00:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.872 20:00:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.872 20:00:20 event -- scripts/common.sh@344 -- # case "$op" in 00:04:48.872 20:00:20 event -- scripts/common.sh@345 -- # : 1 00:04:48.872 20:00:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.872 20:00:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.872 20:00:20 event -- scripts/common.sh@365 -- # decimal 1 00:04:48.872 20:00:20 event -- scripts/common.sh@353 -- # local d=1 00:04:48.872 20:00:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.872 20:00:20 event -- scripts/common.sh@355 -- # echo 1 00:04:48.872 20:00:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.872 20:00:20 event -- scripts/common.sh@366 -- # decimal 2 00:04:48.872 20:00:20 event -- scripts/common.sh@353 -- # local d=2 00:04:48.872 20:00:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.872 20:00:20 event -- scripts/common.sh@355 -- # echo 2 00:04:48.872 20:00:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.872 20:00:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.872 20:00:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.872 20:00:20 event -- scripts/common.sh@368 -- # return 0 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.872 --rc genhtml_branch_coverage=1 00:04:48.872 --rc genhtml_function_coverage=1 00:04:48.872 --rc genhtml_legend=1 00:04:48.872 --rc geninfo_all_blocks=1 00:04:48.872 --rc geninfo_unexecuted_blocks=1 00:04:48.872 00:04:48.872 ' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.872 --rc genhtml_branch_coverage=1 00:04:48.872 --rc genhtml_function_coverage=1 00:04:48.872 --rc genhtml_legend=1 00:04:48.872 --rc geninfo_all_blocks=1 00:04:48.872 --rc geninfo_unexecuted_blocks=1 00:04:48.872 00:04:48.872 ' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.872 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:48.872 --rc genhtml_branch_coverage=1 00:04:48.872 --rc genhtml_function_coverage=1 00:04:48.872 --rc genhtml_legend=1 00:04:48.872 --rc geninfo_all_blocks=1 00:04:48.872 --rc geninfo_unexecuted_blocks=1 00:04:48.872 00:04:48.872 ' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.872 --rc genhtml_branch_coverage=1 00:04:48.872 --rc genhtml_function_coverage=1 00:04:48.872 --rc genhtml_legend=1 00:04:48.872 --rc geninfo_all_blocks=1 00:04:48.872 --rc geninfo_unexecuted_blocks=1 00:04:48.872 00:04:48.872 ' 00:04:48.872 20:00:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.872 20:00:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.872 20:00:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:48.872 20:00:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.872 20:00:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.872 ************************************ 00:04:48.872 START TEST event_perf 00:04:48.872 ************************************ 00:04:48.872 20:00:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.872 Running I/O for 1 seconds...[2024-12-08 20:00:20.828234] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:48.872 [2024-12-08 20:00:20.828390] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 00:04:49.132 [2024-12-08 20:00:21.002587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.391 [2024-12-08 20:00:21.120108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.391 [2024-12-08 20:00:21.120232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.391 [2024-12-08 20:00:21.120357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.391 [2024-12-08 20:00:21.120400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.770 Running I/O for 1 seconds... 00:04:50.770 lcore 0: 211559 00:04:50.770 lcore 1: 211558 00:04:50.770 lcore 2: 211558 00:04:50.770 lcore 3: 211557 00:04:50.770 done. 
00:04:50.770 00:04:50.770 real 0m1.570s 00:04:50.770 user 0m4.336s 00:04:50.770 sys 0m0.114s 00:04:50.770 ************************************ 00:04:50.770 END TEST event_perf 00:04:50.770 ************************************ 00:04:50.770 20:00:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.770 20:00:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.770 20:00:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:50.770 20:00:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:50.770 20:00:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.770 20:00:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.770 ************************************ 00:04:50.770 START TEST event_reactor 00:04:50.770 ************************************ 00:04:50.770 20:00:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:50.770 [2024-12-08 20:00:22.463394] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:50.770 [2024-12-08 20:00:22.463532] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58048 ] 00:04:50.770 [2024-12-08 20:00:22.633439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.770 [2024-12-08 20:00:22.738466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.154 test_start 00:04:52.154 oneshot 00:04:52.154 tick 100 00:04:52.154 tick 100 00:04:52.154 tick 250 00:04:52.154 tick 100 00:04:52.154 tick 100 00:04:52.154 tick 100 00:04:52.154 tick 250 00:04:52.154 tick 500 00:04:52.154 tick 100 00:04:52.154 tick 100 00:04:52.154 tick 250 00:04:52.154 tick 100 00:04:52.154 tick 100 00:04:52.154 test_end 00:04:52.154 00:04:52.154 real 0m1.541s 00:04:52.154 user 0m1.343s 00:04:52.154 sys 0m0.090s 00:04:52.154 ************************************ 00:04:52.154 END TEST event_reactor 00:04:52.154 ************************************ 00:04:52.154 20:00:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.154 20:00:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:52.154 20:00:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.154 20:00:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:52.154 20:00:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.154 20:00:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.154 ************************************ 00:04:52.154 START TEST event_reactor_perf 00:04:52.154 ************************************ 00:04:52.154 20:00:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.154 [2024-12-08 
20:00:24.074593] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:52.154 [2024-12-08 20:00:24.074700] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58090 ] 00:04:52.414 [2024-12-08 20:00:24.243740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.414 [2024-12-08 20:00:24.353452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.795 test_start 00:04:53.795 test_end 00:04:53.795 Performance: 396698 events per second 00:04:53.795 00:04:53.795 real 0m1.544s 00:04:53.795 user 0m1.339s 00:04:53.795 sys 0m0.096s 00:04:53.795 20:00:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.795 20:00:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.795 ************************************ 00:04:53.795 END TEST event_reactor_perf 00:04:53.795 ************************************ 00:04:53.795 20:00:25 event -- event/event.sh@49 -- # uname -s 00:04:53.795 20:00:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:53.795 20:00:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:53.795 20:00:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.795 20:00:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.795 20:00:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.795 ************************************ 00:04:53.795 START TEST event_scheduler 00:04:53.795 ************************************ 00:04:53.795 20:00:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:53.795 * Looking for test storage... 
00:04:54.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.055 20:00:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.055 --rc genhtml_branch_coverage=1 00:04:54.055 --rc genhtml_function_coverage=1 00:04:54.055 --rc genhtml_legend=1 00:04:54.055 --rc geninfo_all_blocks=1 00:04:54.055 --rc geninfo_unexecuted_blocks=1 00:04:54.055 00:04:54.055 ' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.055 --rc genhtml_branch_coverage=1 00:04:54.055 --rc genhtml_function_coverage=1 00:04:54.055 --rc 
genhtml_legend=1 00:04:54.055 --rc geninfo_all_blocks=1 00:04:54.055 --rc geninfo_unexecuted_blocks=1 00:04:54.055 00:04:54.055 ' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.055 --rc genhtml_branch_coverage=1 00:04:54.055 --rc genhtml_function_coverage=1 00:04:54.055 --rc genhtml_legend=1 00:04:54.055 --rc geninfo_all_blocks=1 00:04:54.055 --rc geninfo_unexecuted_blocks=1 00:04:54.055 00:04:54.055 ' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.055 --rc genhtml_branch_coverage=1 00:04:54.055 --rc genhtml_function_coverage=1 00:04:54.055 --rc genhtml_legend=1 00:04:54.055 --rc geninfo_all_blocks=1 00:04:54.055 --rc geninfo_unexecuted_blocks=1 00:04:54.055 00:04:54.055 ' 00:04:54.055 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.055 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58160 00:04:54.055 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.055 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.055 20:00:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58160 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58160 ']' 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.055 20:00:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.055 [2024-12-08 20:00:25.956723] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:54.055 [2024-12-08 20:00:25.956935] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:04:54.315 [2024-12-08 20:00:26.136775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.315 [2024-12-08 20:00:26.250573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.315 [2024-12-08 20:00:26.250887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.315 [2024-12-08 20:00:26.250922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.315 [2024-12-08 20:00:26.250744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:54.883 20:00:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.883 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.883 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.883 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.883 POWER: Cannot set governor of lcore 0 to performance 00:04:54.883 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.883 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.883 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.883 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.883 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:54.883 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:54.883 POWER: Unable to set Power Management Environment for lcore 0 00:04:54.883 [2024-12-08 20:00:26.787518] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:54.883 [2024-12-08 20:00:26.787582] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:54.883 [2024-12-08 20:00:26.787616] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.883 [2024-12-08 20:00:26.787658] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.883 [2024-12-08 20:00:26.787690] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.883 [2024-12-08 20:00:26.787718] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.883 20:00:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.883 20:00:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.143 [2024-12-08 20:00:27.110567] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:55.143 20:00:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.143 20:00:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.143 20:00:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.143 20:00:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.143 20:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 ************************************ 00:04:55.402 START TEST scheduler_create_thread 00:04:55.402 ************************************ 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 2 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 3 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 4 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 5 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 6 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.402 7 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 8 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 9 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 10 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.402 20:00:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.339 20:00:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.339 20:00:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.339 20:00:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.339 20:00:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.339 20:00:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.717 ************************************ 00:04:57.717 END TEST scheduler_create_thread 00:04:57.717 ************************************ 00:04:57.717 20:00:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.717 00:04:57.717 real 0m2.138s 00:04:57.717 user 0m0.026s 00:04:57.717 sys 0m0.009s 00:04:57.717 20:00:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.717 20:00:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.717 20:00:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.717 20:00:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58160 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58160 ']' 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58160 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58160 00:04:57.717 killing process with pid 58160 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58160' 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58160 00:04:57.717 20:00:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58160 00:04:57.976 [2024-12-08 20:00:29.742444] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:58.916 00:04:58.916 real 0m5.223s 00:04:58.916 user 0m8.635s 00:04:58.916 sys 0m0.508s 00:04:58.916 20:00:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.916 20:00:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.916 ************************************ 00:04:58.916 END TEST event_scheduler 00:04:58.916 ************************************ 00:04:59.177 20:00:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:59.177 20:00:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:59.177 20:00:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.177 20:00:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.177 20:00:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.177 ************************************ 00:04:59.177 START TEST app_repeat 00:04:59.177 ************************************ 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58261 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:59.177 
20:00:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.177 Process app_repeat pid: 58261 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58261' 00:04:59.177 spdk_app_start Round 0 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:59.177 20:00:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58261 /var/tmp/spdk-nbd.sock 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.177 20:00:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.177 [2024-12-08 20:00:31.003448] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:59.177 [2024-12-08 20:00:31.003634] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58261 ] 00:04:59.437 [2024-12-08 20:00:31.176913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.438 [2024-12-08 20:00:31.284506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.438 [2024-12-08 20:00:31.284509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.036 20:00:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.036 20:00:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.036 20:00:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.296 Malloc0 00:05:00.296 20:00:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.556 Malloc1 00:05:00.556 20:00:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.556 20:00:32 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.556 20:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.816 /dev/nbd0 00:05:00.816 20:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.816 20:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.816 1+0 records in 00:05:00.816 1+0 
records out 00:05:00.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339706 s, 12.1 MB/s 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.816 20:00:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.816 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.816 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.816 20:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.076 /dev/nbd1 00:05:01.076 20:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.076 20:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.076 1+0 records in 00:05:01.076 1+0 records out 00:05:01.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490254 s, 8.4 MB/s 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.076 20:00:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.076 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.076 20:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.077 20:00:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.077 20:00:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.077 20:00:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.337 { 00:05:01.337 "nbd_device": "/dev/nbd0", 00:05:01.337 "bdev_name": "Malloc0" 00:05:01.337 }, 00:05:01.337 { 00:05:01.337 "nbd_device": "/dev/nbd1", 00:05:01.337 "bdev_name": "Malloc1" 00:05:01.337 } 00:05:01.337 ]' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.337 { 00:05:01.337 "nbd_device": "/dev/nbd0", 00:05:01.337 "bdev_name": "Malloc0" 00:05:01.337 }, 00:05:01.337 { 00:05:01.337 "nbd_device": "/dev/nbd1", 00:05:01.337 "bdev_name": "Malloc1" 00:05:01.337 } 00:05:01.337 ]' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
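The trace above shows `waitfornbd` polling `/proc/partitions` up to 20 times before treating a freshly exported `/dev/nbdX` as usable, then proving it with a single 4 KiB `iflag=direct` read. A minimal sketch of that retry pattern, reconstructed from the xtrace — the probe is made a parameter here so the helper runs without an nbd device, whereas the real helper in `autotest_common.sh` hardcodes the `grep` and the `dd`:

```shell
# Generic retry loop in the style of waitfornbd: run the probe command
# up to 20 times, 0.1 s apart, and report whether it ever succeeded.
wait_for() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# How the trace uses the pattern (requires a real nbd device):
#   wait_for grep -q -w nbd0 /proc/partitions
#   wait_for dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
```

The teardown path visible later in the trace (`waitfornbd_exit`) inverts the predicate: it polls until the `grep` against `/proc/partitions` stops matching.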
00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.337 /dev/nbd1' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.337 /dev/nbd1' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.337 256+0 records in 00:05:01.337 256+0 records out 00:05:01.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145163 s, 72.2 MB/s 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.337 256+0 records in 00:05:01.337 256+0 records out 00:05:01.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248504 s, 42.2 MB/s 00:05:01.337 20:00:33 event.app_repeat -- 
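`nbd_get_count`, traced just above, fetches the app's current exports as JSON over the RPC socket (`nbd_get_disks`), extracts the `nbd_device` fields with `jq`, and counts the `/dev/nbd` matches with `grep -c`. A sketch of that pipeline with the `rpc.py` output replaced by a JSON argument so it runs without a live SPDK app; the `|| true` mirrors the `true` step seen in the trace's empty-list case, needed because `grep -c` exits non-zero when the count is 0:

```shell
# Count nbd exports in a nbd_get_disks-style JSON document, mirroring
# the jq | grep -c pipeline from bdev/nbd_common.sh.
nbd_count_from_json() {
    local nbd_disks_json=$1
    local nbd_disks_name
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c still prints "0" when nothing matches, but exits 1,
    # so swallow the failure status.
    echo "$nbd_disks_name" | grep -c /dev/nbd || true
}
```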
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.337 256+0 records in 00:05:01.337 256+0 records out 00:05:01.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02782 s, 37.7 MB/s 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
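The write/verify pair traced above is `nbd_dd_data_verify`: it fills a scratch file with 1 MiB of random data (256 × 4 KiB), copies it onto every listed nbd device, then byte-compares each device against the file with `cmp -b -n 1M` before deleting the scratch. A runnable sketch with regular files standing in for `/dev/nbd*` and `oflag=direct` dropped so it works without block devices; the `/tmp` scratch path is an assumption (the real one lives under the test directory):

```shell
# Write-then-verify pattern from bdev/nbd_common.sh: "write" seeds the
# devices from a random scratch file, "verify" compares them back.
nbd_dd_data_verify() {
    local nbd_list=($1)     # space-separated device list, word-split
    local operation=$2      # "write" or "verify"
    local tmp_file=/tmp/nbdrandtest
    local i
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2> /dev/null
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 2> /dev/null
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"   # exits non-zero on mismatch
        done
        rm "$tmp_file"
    fi
}
```

Because `cmp` exits non-zero on the first differing byte, any torn or dropped write on either device fails the test immediately.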
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.337 20:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.597 20:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.598 20:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.598 20:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:01.857 20:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.858 20:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.858 20:00:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.858 20:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.117 20:00:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.117 20:00:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.378 20:00:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.765 [2024-12-08 20:00:35.446289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.765 [2024-12-08 20:00:35.546158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.765 [2024-12-08 20:00:35.546161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.765 
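The `spdk_app_start Round 1` banner that follows marks the second pass of the same cycle: per the `for i in {0..2}` visible in the trace, `event.sh` runs the create/export/verify/kill sequence three times. A skeleton of that driver reconstructed from the trace, with a hypothetical `rpc` stub replacing `scripts/rpc.py` so the loop runs standalone:

```shell
# Stand-in for scripts/rpc.py -s /var/tmp/spdk-nbd.sock; just echoes.
rpc() { echo "rpc: $*"; }

# Three rounds of the app_repeat cycle, as the trace shows.
app_repeat_rounds() {
    local i
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        rpc bdev_malloc_create 64 4096    # Malloc0
        rpc bdev_malloc_create 64 4096    # Malloc1
        # ... nbd_rpc_data_verify exports, writes and verifies here ...
        rpc spdk_kill_instance SIGTERM
        sleep 0.1                         # the real test sleeps 3 s
    done
}
```

Restarting the app between rounds is the point of the test: each round re-registers the notification types, which is why the `'bdev_register' already registered` notices recur after every round.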
[2024-12-08 20:00:35.730028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.765 [2024-12-08 20:00:35.730120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.672 spdk_app_start Round 1 00:05:05.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.672 20:00:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.672 20:00:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.672 20:00:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58261 /var/tmp/spdk-nbd.sock 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.672 20:00:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.672 20:00:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.932 Malloc0 00:05:05.932 20:00:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.190 Malloc1 00:05:06.190 20:00:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.190 20:00:38 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.190 20:00:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.449 /dev/nbd0 00:05:06.449 20:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.449 20:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.449 1+0 records in 00:05:06.449 1+0 records out 00:05:06.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021704 s, 18.9 MB/s 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.449 20:00:38 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.449 20:00:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.449 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.449 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.449 20:00:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.708 /dev/nbd1 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.708 1+0 records in 00:05:06.708 1+0 records out 00:05:06.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201429 s, 20.3 MB/s 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.708 20:00:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.708 20:00:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.708 20:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.968 { 00:05:06.968 "nbd_device": "/dev/nbd0", 00:05:06.968 "bdev_name": "Malloc0" 00:05:06.968 }, 00:05:06.968 { 00:05:06.968 "nbd_device": "/dev/nbd1", 00:05:06.968 "bdev_name": "Malloc1" 00:05:06.968 } 00:05:06.968 ]' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.968 { 00:05:06.968 "nbd_device": "/dev/nbd0", 00:05:06.968 "bdev_name": "Malloc0" 00:05:06.968 }, 00:05:06.968 { 00:05:06.968 "nbd_device": "/dev/nbd1", 00:05:06.968 "bdev_name": "Malloc1" 00:05:06.968 } 00:05:06.968 ]' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.968 /dev/nbd1' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.968 /dev/nbd1' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.968 
20:00:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.968 256+0 records in 00:05:06.968 256+0 records out 00:05:06.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126958 s, 82.6 MB/s 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.968 256+0 records in 00:05:06.968 256+0 records out 00:05:06.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209613 s, 50.0 MB/s 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.968 256+0 records in 00:05:06.968 256+0 records out 00:05:06.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249557 s, 42.0 MB/s 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.968 20:00:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.227 20:00:39 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.227 20:00:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.485 20:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.745 20:00:39 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.745 20:00:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.745 20:00:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.004 20:00:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.394 [2024-12-08 20:00:41.061094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.394 [2024-12-08 20:00:41.164399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.394 [2024-12-08 20:00:41.164427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.394 [2024-12-08 20:00:41.352075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.394 [2024-12-08 20:00:41.352124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.301 spdk_app_start Round 2 00:05:11.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:11.301 20:00:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.301 20:00:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.301 20:00:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58261 /var/tmp/spdk-nbd.sock 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.301 20:00:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 20:00:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.301 20:00:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.301 20:00:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.560 Malloc0 00:05:11.560 20:00:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.820 Malloc1 00:05:11.820 20:00:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.820 20:00:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.080 /dev/nbd0 00:05:12.080 20:00:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.080 20:00:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.080 1+0 records in 00:05:12.080 1+0 records out 00:05:12.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033591 s, 12.2 MB/s 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.080 20:00:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.080 20:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.080 20:00:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.080 20:00:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.340 /dev/nbd1 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.340 20:00:44 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.340 1+0 records in 00:05:12.340 1+0 records out 00:05:12.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316568 s, 12.9 MB/s 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.340 20:00:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.340 20:00:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.600 20:00:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.600 { 00:05:12.600 "nbd_device": "/dev/nbd0", 00:05:12.600 "bdev_name": "Malloc0" 00:05:12.600 }, 00:05:12.600 { 00:05:12.600 "nbd_device": "/dev/nbd1", 00:05:12.600 "bdev_name": "Malloc1" 00:05:12.600 } 00:05:12.600 ]' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.601 { 
00:05:12.601 "nbd_device": "/dev/nbd0", 00:05:12.601 "bdev_name": "Malloc0" 00:05:12.601 }, 00:05:12.601 { 00:05:12.601 "nbd_device": "/dev/nbd1", 00:05:12.601 "bdev_name": "Malloc1" 00:05:12.601 } 00:05:12.601 ]' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.601 /dev/nbd1' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.601 /dev/nbd1' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.601 256+0 records in 00:05:12.601 256+0 records out 00:05:12.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133404 s, 78.6 MB/s 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.601 20:00:44 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.601 256+0 records in 00:05:12.601 256+0 records out 00:05:12.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242611 s, 43.2 MB/s 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.601 256+0 records in 00:05:12.601 256+0 records out 00:05:12.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247015 s, 42.4 MB/s 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.601 20:00:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.861 20:00:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.121 20:00:44 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.121 20:00:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.379 20:00:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.379 20:00:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.947 20:00:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.885 
[2024-12-08 20:00:46.753925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.885 [2024-12-08 20:00:46.862373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.885 [2024-12-08 20:00:46.862376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.143 [2024-12-08 20:00:47.044447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.143 [2024-12-08 20:00:47.044533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.047 20:00:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58261 /var/tmp/spdk-nbd.sock 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.047 20:00:48 event.app_repeat -- event/event.sh@39 -- # killprocess 58261 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58261 ']' 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58261 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58261 00:05:17.047 killing process with pid 58261 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58261' 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58261 00:05:17.047 20:00:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58261 00:05:17.981 spdk_app_start is called in Round 0. 00:05:17.981 Shutdown signal received, stop current app iteration 00:05:17.982 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:17.982 spdk_app_start is called in Round 1. 00:05:17.982 Shutdown signal received, stop current app iteration 00:05:17.982 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:17.982 spdk_app_start is called in Round 2. 
00:05:17.982 Shutdown signal received, stop current app iteration 00:05:17.982 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:17.982 spdk_app_start is called in Round 3. 00:05:17.982 Shutdown signal received, stop current app iteration 00:05:17.982 ************************************ 00:05:17.982 END TEST app_repeat 00:05:17.982 ************************************ 00:05:17.982 20:00:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:17.982 20:00:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:17.982 00:05:17.982 real 0m18.986s 00:05:17.982 user 0m40.587s 00:05:17.982 sys 0m2.699s 00:05:17.982 20:00:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.982 20:00:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 20:00:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.241 20:00:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.241 20:00:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.241 20:00:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.241 20:00:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.241 ************************************ 00:05:18.241 START TEST cpu_locks 00:05:18.241 ************************************ 00:05:18.241 20:00:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.241 * Looking for test storage... 
00:05:18.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.241 20:00:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 
00:05:18.241 00:05:18.241 ' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.241 --rc genhtml_branch_coverage=1 00:05:18.241 --rc genhtml_function_coverage=1 00:05:18.241 --rc genhtml_legend=1 00:05:18.241 --rc geninfo_all_blocks=1 00:05:18.241 --rc geninfo_unexecuted_blocks=1 00:05:18.241 00:05:18.241 ' 00:05:18.241 20:00:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.241 20:00:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.241 20:00:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.241 20:00:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.241 20:00:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.500 ************************************ 00:05:18.500 START TEST default_locks 00:05:18.500 ************************************ 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58708 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.500 
20:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58708 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.500 20:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.500 [2024-12-08 20:00:50.320374] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:18.500 [2024-12-08 20:00:50.320499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58708 ] 00:05:18.759 [2024-12-08 20:00:50.494127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.759 [2024-12-08 20:00:50.603056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.702 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.702 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:19.702 20:00:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58708 00:05:19.702 20:00:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58708 00:05:19.702 20:00:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58708 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58708 ']' 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58708 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58708 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.966 killing process with pid 58708 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58708' 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58708 00:05:19.966 20:00:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58708 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58708 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58708 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58708 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.500 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58708) - No such process 00:05:22.500 ERROR: process (pid: 58708) is no longer running 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.500 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.501 00:05:22.501 real 0m4.012s 00:05:22.501 user 0m3.960s 00:05:22.501 sys 0m0.648s 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.501 20:00:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.501 ************************************ 00:05:22.501 END TEST default_locks 00:05:22.501 ************************************ 00:05:22.501 20:00:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.501 20:00:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:22.501 20:00:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.501 20:00:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.501 ************************************ 00:05:22.501 START TEST default_locks_via_rpc 00:05:22.501 ************************************ 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58783 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58783 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58783 ']' 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.501 20:00:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.501 [2024-12-08 20:00:54.395517] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:22.501 [2024-12-08 20:00:54.395660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58783 ] 00:05:22.760 [2024-12-08 20:00:54.567066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.760 [2024-12-08 20:00:54.673572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.699 20:00:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58783 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58783 00:05:23.699 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58783 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58783 ']' 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58783 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.267 20:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58783 00:05:24.267 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.267 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.267 killing process with pid 58783 00:05:24.267 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58783' 00:05:24.267 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58783 00:05:24.267 20:00:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58783 00:05:26.800 00:05:26.800 real 0m4.006s 00:05:26.800 user 0m3.963s 00:05:26.800 sys 0m0.647s 00:05:26.800 20:00:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.800 20:00:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.800 ************************************ 00:05:26.800 END TEST default_locks_via_rpc 00:05:26.800 ************************************ 00:05:26.800 20:00:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.800 20:00:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.800 20:00:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.800 20:00:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.800 ************************************ 00:05:26.800 START TEST non_locking_app_on_locked_coremask 00:05:26.800 ************************************ 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58852 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58852 /var/tmp/spdk.sock 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58852 ']' 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.800 20:00:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.800 [2024-12-08 20:00:58.464547] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:26.800 [2024-12-08 20:00:58.464652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58852 ] 00:05:26.800 [2024-12-08 20:00:58.638772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.800 [2024-12-08 20:00:58.745520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58872 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58872 /var/tmp/spdk2.sock 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58872 ']' 00:05:27.736 20:00:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.736 20:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.736 [2024-12-08 20:00:59.647337] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:27.736 [2024-12-08 20:00:59.647534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58872 ] 00:05:27.994 [2024-12-08 20:00:59.814098] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.994 [2024-12-08 20:00:59.814148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.252 [2024-12-08 20:01:00.036425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58852 ']' 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.805 killing process with pid 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58852' 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58852 00:05:30.805 20:01:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58852 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58872 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58872 ']' 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58872 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58872 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.083 killing process with pid 58872 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.083 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58872' 00:05:36.084 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58872 00:05:36.084 20:01:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58872 00:05:37.992 00:05:37.993 real 0m11.186s 00:05:37.993 user 0m11.389s 00:05:37.993 sys 0m1.182s 00:05:37.993 20:01:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:37.993 20:01:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.993 ************************************ 00:05:37.993 END TEST non_locking_app_on_locked_coremask 00:05:37.993 ************************************ 00:05:37.993 20:01:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.993 20:01:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.993 20:01:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.993 20:01:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.993 ************************************ 00:05:37.993 START TEST locking_app_on_unlocked_coremask 00:05:37.993 ************************************ 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59016 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59016 /var/tmp/spdk.sock 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59016 ']' 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.993 20:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.993 [2024-12-08 20:01:09.716455] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:37.993 [2024-12-08 20:01:09.716662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:05:37.993 [2024-12-08 20:01:09.893118] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.993 [2024-12-08 20:01:09.893267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.253 [2024-12-08 20:01:10.003005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59032 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59032 /var/tmp/spdk2.sock 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59032 ']' 00:05:39.192 20:01:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.192 20:01:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.192 [2024-12-08 20:01:10.904919] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:39.192 [2024-12-08 20:01:10.905123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:39.192 [2024-12-08 20:01:11.071441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.451 [2024-12-08 20:01:11.290729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.984 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.984 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59032 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59032 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59016 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59016 ']' 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59016 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59016 00:05:41.985 killing process with pid 59016 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59016' 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59016 00:05:41.985 20:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59016 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59032 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59032 ']' 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59032 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.253 
20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59032 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.253 killing process with pid 59032 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59032' 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59032 00:05:47.253 20:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59032 00:05:49.159 00:05:49.159 real 0m11.159s 00:05:49.159 user 0m11.357s 00:05:49.159 sys 0m1.161s 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.159 ************************************ 00:05:49.159 END TEST locking_app_on_unlocked_coremask 00:05:49.159 ************************************ 00:05:49.159 20:01:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.159 20:01:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.159 20:01:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.159 20:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.159 ************************************ 00:05:49.159 START TEST locking_app_on_locked_coremask 00:05:49.159 
************************************ 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59183 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59183 /var/tmp/spdk.sock 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:05:49.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.159 20:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.159 [2024-12-08 20:01:20.936400] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:49.159 [2024-12-08 20:01:20.937004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:05:49.159 [2024-12-08 20:01:21.096082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.419 [2024-12-08 20:01:21.204683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59199 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59199 /var/tmp/spdk2.sock 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59199 /var/tmp/spdk2.sock 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59199 /var/tmp/spdk2.sock 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59199 ']' 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.358 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.358 [2024-12-08 20:01:22.110894] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:50.358 [2024-12-08 20:01:22.111441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:50.358 [2024-12-08 20:01:22.274814] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59183 has claimed it. 00:05:50.358 [2024-12-08 20:01:22.274891] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:50.927 ERROR: process (pid: 59199) is no longer running 00:05:50.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59199) - No such process 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59183 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59183 00:05:50.927 20:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59183 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59183 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59183 00:05:51.496 
20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59183' 00:05:51.496 killing process with pid 59183 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59183 00:05:51.496 20:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59183 00:05:54.066 00:05:54.066 real 0m4.696s 00:05:54.066 user 0m4.856s 00:05:54.066 sys 0m0.784s 00:05:54.066 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.066 ************************************ 00:05:54.066 END TEST locking_app_on_locked_coremask 00:05:54.066 ************************************ 00:05:54.066 20:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 20:01:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.066 20:01:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.066 20:01:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.066 20:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 ************************************ 00:05:54.066 START TEST locking_overlapped_coremask 00:05:54.066 ************************************ 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59266 00:05:54.066 20:01:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59266 /var/tmp/spdk.sock 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59266 ']' 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.066 20:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 [2024-12-08 20:01:25.695521] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:54.066 [2024-12-08 20:01:25.695637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:05:54.066 [2024-12-08 20:01:25.872112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.066 [2024-12-08 20:01:25.982939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.066 [2024-12-08 20:01:25.983078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.066 [2024-12-08 20:01:25.983113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59289 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59289 /var/tmp/spdk2.sock 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59289 /var/tmp/spdk2.sock 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59289 /var/tmp/spdk2.sock 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59289 ']' 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.002 20:01:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.002 [2024-12-08 20:01:26.897513] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:55.002 [2024-12-08 20:01:26.897627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:05:55.259 [2024-12-08 20:01:27.072160] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59266 has claimed it. 00:05:55.260 [2024-12-08 20:01:27.072258] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:55.826 ERROR: process (pid: 59289) is no longer running 00:05:55.826 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59289) - No such process 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59266 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59266 ']' 00:05:55.826 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59266 00:05:55.826 20:01:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59266 00:05:55.827 killing process with pid 59266 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59266' 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59266 00:05:55.827 20:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59266 00:05:58.375 00:05:58.375 real 0m4.355s 00:05:58.375 user 0m11.783s 00:05:58.375 sys 0m0.579s 00:05:58.375 20:01:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.375 20:01:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.375 ************************************ 00:05:58.375 END TEST locking_overlapped_coremask 00:05:58.375 ************************************ 00:05:58.375 20:01:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.375 20:01:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.375 20:01:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.375 20:01:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.375 ************************************ 00:05:58.375 START TEST 
locking_overlapped_coremask_via_rpc 00:05:58.375 ************************************ 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59353 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59353 /var/tmp/spdk.sock 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.375 20:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.375 [2024-12-08 20:01:30.117536] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:58.375 [2024-12-08 20:01:30.117764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:05:58.375 [2024-12-08 20:01:30.290765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.375 [2024-12-08 20:01:30.290820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.635 [2024-12-08 20:01:30.405426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.635 [2024-12-08 20:01:30.405605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.635 [2024-12-08 20:01:30.405657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59371 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59371 /var/tmp/spdk2.sock 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.573 20:01:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.573 20:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.573 [2024-12-08 20:01:31.323618] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:59.573 [2024-12-08 20:01:31.323826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:05:59.573 [2024-12-08 20:01:31.492830] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.573 [2024-12-08 20:01:31.492905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.832 [2024-12-08 20:01:31.796091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.832 [2024-12-08 20:01:31.796178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.832 [2024-12-08 20:01:31.796146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.370 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.370 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.370 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.371 20:01:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 [2024-12-08 20:01:33.906210] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59353 has claimed it. 00:06:02.371 request: 00:06:02.371 { 00:06:02.371 "method": "framework_enable_cpumask_locks", 00:06:02.371 "req_id": 1 00:06:02.371 } 00:06:02.371 Got JSON-RPC error response 00:06:02.371 response: 00:06:02.371 { 00:06:02.371 "code": -32603, 00:06:02.371 "message": "Failed to claim CPU core: 2" 00:06:02.371 } 00:06:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59353 /var/tmp/spdk.sock 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.371 20:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59371 /var/tmp/spdk2.sock 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.371 00:06:02.371 real 0m4.316s 00:06:02.371 user 0m1.263s 00:06:02.371 sys 0m0.186s 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.371 20:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 ************************************ 00:06:02.371 END TEST locking_overlapped_coremask_via_rpc 00:06:02.371 ************************************ 00:06:02.658 20:01:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.658 20:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59353 ]] 00:06:02.658 20:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59353 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59353 ']' 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59353 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59353 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59353' 00:06:02.658 killing process with pid 59353 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59353 00:06:02.658 20:01:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59353 00:06:05.192 20:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59371 ]] 00:06:05.192 20:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59371 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59371 ']' 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59371 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59371 00:06:05.192 killing process with pid 59371 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59371' 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59371 00:06:05.192 20:01:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59371 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59353 ]] 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59353 00:06:08.483 Process with pid 59353 is not found 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59353 ']' 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59353 00:06:08.483 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59353) - No such process 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59353 is not found' 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59371 ]] 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59371 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59371 ']' 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59371 00:06:08.483 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59371) - No such process 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59371 is not found' 00:06:08.483 Process with pid 59371 is not found 00:06:08.483 20:01:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.483 00:06:08.483 real 0m49.809s 00:06:08.483 user 1m26.000s 00:06:08.483 sys 0m6.604s 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.483 20:01:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.483 
************************************ 00:06:08.483 END TEST cpu_locks 00:06:08.483 ************************************ 00:06:08.483 00:06:08.483 real 1m19.308s 00:06:08.483 user 2m22.488s 00:06:08.483 sys 0m10.499s 00:06:08.483 20:01:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.483 20:01:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.483 ************************************ 00:06:08.483 END TEST event 00:06:08.483 ************************************ 00:06:08.483 20:01:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.483 20:01:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.483 20:01:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.483 20:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.483 ************************************ 00:06:08.483 START TEST thread 00:06:08.483 ************************************ 00:06:08.483 20:01:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:08.483 * Looking for test storage... 
00:06:08.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.483 20:01:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.483 20:01:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.483 20:01:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.483 20:01:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.483 20:01:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.483 20:01:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.483 20:01:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.483 20:01:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.483 20:01:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.483 20:01:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.483 20:01:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.483 20:01:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:08.483 20:01:40 thread -- scripts/common.sh@345 -- # : 1 00:06:08.483 20:01:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.483 20:01:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.483 20:01:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:08.483 20:01:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:08.483 20:01:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.483 20:01:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:08.483 20:01:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.483 20:01:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:08.483 20:01:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:08.483 20:01:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.483 20:01:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:08.483 20:01:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.483 20:01:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.483 20:01:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.483 20:01:40 thread -- scripts/common.sh@368 -- # return 0 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.483 20:01:40 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.483 --rc genhtml_branch_coverage=1 00:06:08.483 --rc genhtml_function_coverage=1 00:06:08.483 --rc genhtml_legend=1 00:06:08.483 --rc geninfo_all_blocks=1 00:06:08.483 --rc geninfo_unexecuted_blocks=1 00:06:08.483 00:06:08.483 ' 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.484 --rc genhtml_branch_coverage=1 00:06:08.484 --rc genhtml_function_coverage=1 00:06:08.484 --rc genhtml_legend=1 00:06:08.484 --rc geninfo_all_blocks=1 00:06:08.484 --rc geninfo_unexecuted_blocks=1 00:06:08.484 00:06:08.484 ' 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.484 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.484 --rc genhtml_branch_coverage=1 00:06:08.484 --rc genhtml_function_coverage=1 00:06:08.484 --rc genhtml_legend=1 00:06:08.484 --rc geninfo_all_blocks=1 00:06:08.484 --rc geninfo_unexecuted_blocks=1 00:06:08.484 00:06:08.484 ' 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.484 --rc genhtml_branch_coverage=1 00:06:08.484 --rc genhtml_function_coverage=1 00:06:08.484 --rc genhtml_legend=1 00:06:08.484 --rc geninfo_all_blocks=1 00:06:08.484 --rc geninfo_unexecuted_blocks=1 00:06:08.484 00:06:08.484 ' 00:06:08.484 20:01:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.484 20:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.484 ************************************ 00:06:08.484 START TEST thread_poller_perf 00:06:08.484 ************************************ 00:06:08.484 20:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.484 [2024-12-08 20:01:40.208132] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:08.484 [2024-12-08 20:01:40.208317] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:06:08.484 [2024-12-08 20:01:40.378778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.743 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:08.743 [2024-12-08 20:01:40.519087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.185 [2024-12-08T20:01:42.163Z] ====================================== 00:06:10.185 [2024-12-08T20:01:42.163Z] busy:2299966382 (cyc) 00:06:10.185 [2024-12-08T20:01:42.163Z] total_run_count: 412000 00:06:10.185 [2024-12-08T20:01:42.163Z] tsc_hz: 2290000000 (cyc) 00:06:10.185 [2024-12-08T20:01:42.163Z] ====================================== 00:06:10.185 [2024-12-08T20:01:42.163Z] poller_cost: 5582 (cyc), 2437 (nsec) 00:06:10.185 ************************************ 00:06:10.185 END TEST thread_poller_perf 00:06:10.185 ************************************ 00:06:10.185 00:06:10.186 real 0m1.606s 00:06:10.186 user 0m1.388s 00:06:10.186 sys 0m0.111s 00:06:10.186 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.186 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.186 20:01:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.186 20:01:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:10.186 20:01:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.186 20:01:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.186 ************************************ 00:06:10.186 START TEST thread_poller_perf 00:06:10.186 
************************************ 00:06:10.186 20:01:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.186 [2024-12-08 20:01:41.882661] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:10.186 [2024-12-08 20:01:41.882795] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:06:10.186 [2024-12-08 20:01:42.053326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.445 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:10.445 [2024-12-08 20:01:42.193868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.819 [2024-12-08T20:01:43.797Z] ====================================== 00:06:11.819 [2024-12-08T20:01:43.797Z] busy:2293658142 (cyc) 00:06:11.819 [2024-12-08T20:01:43.797Z] total_run_count: 4864000 00:06:11.819 [2024-12-08T20:01:43.797Z] tsc_hz: 2290000000 (cyc) 00:06:11.819 [2024-12-08T20:01:43.797Z] ====================================== 00:06:11.819 [2024-12-08T20:01:43.797Z] poller_cost: 471 (cyc), 205 (nsec) 00:06:11.819 00:06:11.819 real 0m1.605s 00:06:11.819 user 0m1.381s 00:06:11.819 sys 0m0.117s 00:06:11.819 ************************************ 00:06:11.819 END TEST thread_poller_perf 00:06:11.819 ************************************ 00:06:11.819 20:01:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.819 20:01:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.819 20:01:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:11.819 ************************************ 00:06:11.819 END TEST thread 00:06:11.819 ************************************ 00:06:11.819 
00:06:11.819 real 0m3.564s 00:06:11.819 user 0m2.923s 00:06:11.819 sys 0m0.436s 00:06:11.819 20:01:43 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.819 20:01:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.819 20:01:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:11.819 20:01:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.819 20:01:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.819 20:01:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.819 20:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:11.819 ************************************ 00:06:11.819 START TEST app_cmdline 00:06:11.819 ************************************ 00:06:11.819 20:01:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.820 * Looking for test storage... 00:06:11.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.820 20:01:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.820 --rc genhtml_branch_coverage=1 00:06:11.820 --rc genhtml_function_coverage=1 00:06:11.820 --rc 
genhtml_legend=1 00:06:11.820 --rc geninfo_all_blocks=1 00:06:11.820 --rc geninfo_unexecuted_blocks=1 00:06:11.820 00:06:11.820 ' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.820 --rc genhtml_branch_coverage=1 00:06:11.820 --rc genhtml_function_coverage=1 00:06:11.820 --rc genhtml_legend=1 00:06:11.820 --rc geninfo_all_blocks=1 00:06:11.820 --rc geninfo_unexecuted_blocks=1 00:06:11.820 00:06:11.820 ' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.820 --rc genhtml_branch_coverage=1 00:06:11.820 --rc genhtml_function_coverage=1 00:06:11.820 --rc genhtml_legend=1 00:06:11.820 --rc geninfo_all_blocks=1 00:06:11.820 --rc geninfo_unexecuted_blocks=1 00:06:11.820 00:06:11.820 ' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.820 --rc genhtml_branch_coverage=1 00:06:11.820 --rc genhtml_function_coverage=1 00:06:11.820 --rc genhtml_legend=1 00:06:11.820 --rc geninfo_all_blocks=1 00:06:11.820 --rc geninfo_unexecuted_blocks=1 00:06:11.820 00:06:11.820 ' 00:06:11.820 20:01:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.820 20:01:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59697 00:06:11.820 20:01:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.820 20:01:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59697 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59697 ']' 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.820 20:01:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.079 [2024-12-08 20:01:43.889633] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:12.079 [2024-12-08 20:01:43.889897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:06:12.337 [2024-12-08 20:01:44.072467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.337 [2024-12-08 20:01:44.215475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.273 20:01:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.273 20:01:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:13.273 20:01:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:13.532 { 00:06:13.532 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:06:13.532 "fields": { 00:06:13.532 "major": 25, 00:06:13.532 "minor": 1, 00:06:13.532 "patch": 0, 00:06:13.532 "suffix": "-pre", 00:06:13.532 "commit": "a2f5e1c2d" 00:06:13.532 } 00:06:13.532 } 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.532 20:01:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:13.532 20:01:45 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.791 request: 00:06:13.791 { 00:06:13.791 "method": "env_dpdk_get_mem_stats", 00:06:13.791 "req_id": 1 00:06:13.791 } 00:06:13.791 Got JSON-RPC error response 00:06:13.791 response: 00:06:13.791 { 00:06:13.791 "code": -32601, 00:06:13.791 "message": "Method not found" 00:06:13.791 } 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.791 20:01:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59697 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59697 ']' 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59697 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59697 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.791 killing process with pid 59697 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59697' 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59697 00:06:13.791 20:01:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59697 00:06:16.352 00:06:16.352 real 0m4.746s 00:06:16.352 user 0m4.744s 00:06:16.352 sys 0m0.794s 00:06:16.352 
************************************ 00:06:16.352 END TEST app_cmdline 00:06:16.352 ************************************ 00:06:16.352 20:01:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.352 20:01:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.611 20:01:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:16.611 20:01:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.611 20:01:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.611 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:16.611 ************************************ 00:06:16.611 START TEST version 00:06:16.611 ************************************ 00:06:16.611 20:01:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:16.611 * Looking for test storage... 00:06:16.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:16.611 20:01:48 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.611 20:01:48 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.611 20:01:48 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.611 20:01:48 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.612 20:01:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.612 20:01:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.612 20:01:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.612 20:01:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.612 20:01:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.612 20:01:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.612 20:01:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.612 20:01:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.612 20:01:48 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:16.612 20:01:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.612 20:01:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.612 20:01:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:16.612 20:01:48 version -- scripts/common.sh@345 -- # : 1 00:06:16.612 20:01:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.612 20:01:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.612 20:01:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:16.612 20:01:48 version -- scripts/common.sh@353 -- # local d=1 00:06:16.612 20:01:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.612 20:01:48 version -- scripts/common.sh@355 -- # echo 1 00:06:16.612 20:01:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.612 20:01:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:16.612 20:01:48 version -- scripts/common.sh@353 -- # local d=2 00:06:16.612 20:01:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.612 20:01:48 version -- scripts/common.sh@355 -- # echo 2 00:06:16.612 20:01:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.612 20:01:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.612 20:01:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.612 20:01:48 version -- scripts/common.sh@368 -- # return 0 00:06:16.612 20:01:48 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.612 20:01:48 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.612 --rc genhtml_branch_coverage=1 00:06:16.612 --rc genhtml_function_coverage=1 00:06:16.612 --rc genhtml_legend=1 00:06:16.612 --rc geninfo_all_blocks=1 00:06:16.612 --rc geninfo_unexecuted_blocks=1 00:06:16.612 00:06:16.612 ' 00:06:16.612 20:01:48 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.612 --rc genhtml_branch_coverage=1 00:06:16.612 --rc genhtml_function_coverage=1 00:06:16.612 --rc genhtml_legend=1 00:06:16.612 --rc geninfo_all_blocks=1 00:06:16.612 --rc geninfo_unexecuted_blocks=1 00:06:16.612 00:06:16.612 ' 00:06:16.612 20:01:48 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.612 --rc genhtml_branch_coverage=1 00:06:16.612 --rc genhtml_function_coverage=1 00:06:16.612 --rc genhtml_legend=1 00:06:16.612 --rc geninfo_all_blocks=1 00:06:16.612 --rc geninfo_unexecuted_blocks=1 00:06:16.612 00:06:16.612 ' 00:06:16.612 20:01:48 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.612 --rc genhtml_branch_coverage=1 00:06:16.612 --rc genhtml_function_coverage=1 00:06:16.612 --rc genhtml_legend=1 00:06:16.612 --rc geninfo_all_blocks=1 00:06:16.612 --rc geninfo_unexecuted_blocks=1 00:06:16.612 00:06:16.612 ' 00:06:16.612 20:01:48 version -- app/version.sh@17 -- # get_header_version major 00:06:16.612 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:16.612 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:06:16.612 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.612 20:01:48 version -- app/version.sh@17 -- # major=25 00:06:16.612 20:01:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:16.874 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.874 20:01:48 version -- app/version.sh@18 -- 
# minor=1 00:06:16.874 20:01:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:16.874 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.874 20:01:48 version -- app/version.sh@19 -- # patch=0 00:06:16.874 20:01:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:16.874 20:01:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # cut -f2 00:06:16.874 20:01:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.874 20:01:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:16.874 20:01:48 version -- app/version.sh@22 -- # version=25.1 00:06:16.874 20:01:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:16.874 20:01:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:16.874 20:01:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:16.874 20:01:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:16.874 20:01:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:16.874 20:01:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:16.874 00:06:16.874 real 0m0.309s 00:06:16.874 user 0m0.171s 00:06:16.874 sys 0m0.195s 00:06:16.874 ************************************ 00:06:16.874 END TEST version 00:06:16.874 ************************************ 00:06:16.874 20:01:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.874 20:01:48 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.874 20:01:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:16.874 20:01:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:16.874 20:01:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:16.874 20:01:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.874 20:01:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.874 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:16.874 ************************************ 00:06:16.874 START TEST bdev_raid 00:06:16.874 ************************************ 00:06:16.874 20:01:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:16.874 * Looking for test storage... 00:06:17.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.133 
20:01:48 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.133 20:01:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.133 --rc genhtml_branch_coverage=1 00:06:17.133 --rc genhtml_function_coverage=1 00:06:17.133 --rc genhtml_legend=1 00:06:17.133 --rc geninfo_all_blocks=1 00:06:17.133 --rc geninfo_unexecuted_blocks=1 00:06:17.133 00:06:17.133 ' 00:06:17.133 20:01:48 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:06:17.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.133 --rc genhtml_branch_coverage=1 00:06:17.133 --rc genhtml_function_coverage=1 00:06:17.133 --rc genhtml_legend=1 00:06:17.133 --rc geninfo_all_blocks=1 00:06:17.133 --rc geninfo_unexecuted_blocks=1 00:06:17.133 00:06:17.133 ' 00:06:17.134 20:01:48 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.134 --rc genhtml_branch_coverage=1 00:06:17.134 --rc genhtml_function_coverage=1 00:06:17.134 --rc genhtml_legend=1 00:06:17.134 --rc geninfo_all_blocks=1 00:06:17.134 --rc geninfo_unexecuted_blocks=1 00:06:17.134 00:06:17.134 ' 00:06:17.134 20:01:48 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.134 --rc genhtml_branch_coverage=1 00:06:17.134 --rc genhtml_function_coverage=1 00:06:17.134 --rc genhtml_legend=1 00:06:17.134 --rc geninfo_all_blocks=1 00:06:17.134 --rc geninfo_unexecuted_blocks=1 00:06:17.134 00:06:17.134 ' 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.134 20:01:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:17.134 20:01:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:17.134 20:01:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.134 20:01:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.134 20:01:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:17.134 ************************************ 00:06:17.134 START TEST raid1_resize_data_offset_test 00:06:17.134 ************************************ 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59892 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59892' 00:06:17.134 Process raid pid: 59892 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59892 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59892 ']' 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.134 20:01:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.134 [2024-12-08 20:01:49.072793] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:17.134 [2024-12-08 20:01:49.072992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.392 [2024-12-08 20:01:49.247023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.652 [2024-12-08 20:01:49.388272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.652 [2024-12-08 20:01:49.628623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:17.652 [2024-12-08 20:01:49.628735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.219 malloc0 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.219 20:01:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.219 malloc1 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.219 20:01:50 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.219 null0 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.219 [2024-12-08 20:01:50.105226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:18.219 [2024-12-08 20:01:50.107313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:18.219 [2024-12-08 20:01:50.107483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:18.219 [2024-12-08 20:01:50.107673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:18.219 [2024-12-08 20:01:50.107689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:18.219 [2024-12-08 20:01:50.107987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:18.219 [2024-12-08 20:01:50.108154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:18.219 [2024-12-08 20:01:50.108169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:18.219 [2024-12-08 20:01:50.108373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.219 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.220 [2024-12-08 20:01:50.165097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.220 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.153 malloc2 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.153 [2024-12-08 20:01:50.778972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:19.153 [2024-12-08 20:01:50.797330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.153 [2024-12-08 20:01:50.799520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.153 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59892 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59892 ']' 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59892 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59892 00:06:19.154 killing process with pid 59892 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59892' 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59892 00:06:19.154 [2024-12-08 20:01:50.889174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:19.154 20:01:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59892 00:06:19.154 [2024-12-08 20:01:50.889369] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:19.154 [2024-12-08 20:01:50.889423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.154 [2024-12-08 20:01:50.889442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:19.154 [2024-12-08 20:01:50.926439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.154 [2024-12-08 20:01:50.926816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.154 [2024-12-08 20:01:50.926834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:21.053 [2024-12-08 20:01:52.909632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:22.428 ************************************ 00:06:22.428 END TEST raid1_resize_data_offset_test 00:06:22.428 ************************************ 00:06:22.428 20:01:54 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:22.428 00:06:22.428 real 0m5.209s 00:06:22.428 user 0m4.888s 00:06:22.428 sys 0m0.756s 00:06:22.428 20:01:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.428 20:01:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.428 20:01:54 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:22.428 20:01:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.428 20:01:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.428 20:01:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:22.428 ************************************ 00:06:22.428 START TEST raid0_resize_superblock_test 00:06:22.428 ************************************ 00:06:22.428 Process raid pid: 59981 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59981 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59981' 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59981 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59981 ']' 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.428 20:01:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.428 [2024-12-08 20:01:54.360230] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:22.428 [2024-12-08 20:01:54.360494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.687 [2024-12-08 20:01:54.548571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.947 [2024-12-08 20:01:54.691876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.206 [2024-12-08 20:01:54.948060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.206 [2024-12-08 20:01:54.948150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.465 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.465 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:23.465 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:23.465 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.465 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:24.038 malloc0 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 [2024-12-08 20:01:55.738856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:24.038 [2024-12-08 20:01:55.738974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.038 [2024-12-08 20:01:55.739029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:24.038 [2024-12-08 20:01:55.739105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.038 [2024-12-08 20:01:55.741406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.038 [2024-12-08 20:01:55.741512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:24.038 pt0 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 3e446ad2-b1d9-4e55-a14a-d2cfd2296456 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 9765ca68-593e-4fa8-92b9-b28942778b94 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 5a2223fc-d825-452c-be50-06291158f541 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.038 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 [2024-12-08 20:01:55.857615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9765ca68-593e-4fa8-92b9-b28942778b94 is claimed 00:06:24.038 [2024-12-08 20:01:55.857699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5a2223fc-d825-452c-be50-06291158f541 is claimed 00:06:24.038 [2024-12-08 20:01:55.857825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:24.038 [2024-12-08 20:01:55.857839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:24.039 [2024-12-08 20:01:55.858120] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:24.039 [2024-12-08 20:01:55.858330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:24.039 [2024-12-08 20:01:55.858343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:24.039 [2024-12-08 20:01:55.858488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:24.039 20:01:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.039 [2024-12-08 20:01:55.973703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.039 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:24.311 20:01:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.017634] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:24.311 [2024-12-08 20:01:56.017670] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9765ca68-593e-4fa8-92b9-b28942778b94' was resized: old size 131072, new size 204800 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.025565] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:24.311 [2024-12-08 20:01:56.025599] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5a2223fc-d825-452c-be50-06291158f541' was resized: old size 131072, new size 204800 00:06:24.311 [2024-12-08 20:01:56.025640] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:24.311 [2024-12-08 20:01:56.133342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.173085] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:24.311 [2024-12-08 20:01:56.173160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:24.311 [2024-12-08 20:01:56.173182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:24.311 [2024-12-08 20:01:56.173199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:24.311 [2024-12-08 20:01:56.173338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:24.311 [2024-12-08 20:01:56.173390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:24.311 [2024-12-08 20:01:56.173411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.181007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:24.311 [2024-12-08 20:01:56.181058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.311 [2024-12-08 20:01:56.181085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:24.311 [2024-12-08 20:01:56.181100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.311 [2024-12-08 20:01:56.183326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.311 [2024-12-08 20:01:56.183405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:24.311 [2024-12-08 20:01:56.185155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9765ca68-593e-4fa8-92b9-b28942778b94 00:06:24.311 [2024-12-08 20:01:56.185223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9765ca68-593e-4fa8-92b9-b28942778b94 is claimed 00:06:24.311 [2024-12-08 20:01:56.185323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5a2223fc-d825-452c-be50-06291158f541 00:06:24.311 [2024-12-08 20:01:56.185341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5a2223fc-d825-452c-be50-06291158f541 is claimed 00:06:24.311 [2024-12-08 20:01:56.185508] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5a2223fc-d825-452c-be50-06291158f541 (2) smaller than existing raid bdev Raid (3) 00:06:24.311 [2024-12-08 20:01:56.185534] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9765ca68-593e-4fa8-92b9-b28942778b94: File exists 00:06:24.311 [2024-12-08 20:01:56.185574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:24.311 [2024-12-08 20:01:56.185589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:24.311 pt0 00:06:24.311 [2024-12-08 20:01:56.185891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 [2024-12-08 20:01:56.186189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:24.311 [2024-12-08 20:01:56.186260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.186598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.311 [2024-12-08 20:01:56.201396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:24.311 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59981 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59981 ']' 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59981 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59981 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59981' 00:06:24.312 killing process with pid 59981 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59981 00:06:24.312 20:01:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59981 00:06:24.312 [2024-12-08 20:01:56.275285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:24.312 [2024-12-08 20:01:56.275415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:24.312 [2024-12-08 20:01:56.275522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:24.312 [2024-12-08 20:01:56.275569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:25.727 [2024-12-08 20:01:57.684560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:27.100 20:01:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:27.100 00:06:27.100 real 0m4.531s 00:06:27.100 user 0m4.635s 00:06:27.100 sys 0m0.641s 00:06:27.100 20:01:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.100 20:01:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.100 
************************************ 00:06:27.100 END TEST raid0_resize_superblock_test 00:06:27.100 ************************************ 00:06:27.100 20:01:58 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:27.100 20:01:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.100 20:01:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.100 20:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:27.100 ************************************ 00:06:27.100 START TEST raid1_resize_superblock_test 00:06:27.100 ************************************ 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60080 00:06:27.100 Process raid pid: 60080 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60080' 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60080 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60080 ']' 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.100 20:01:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.100 [2024-12-08 20:01:58.954928] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:27.100 [2024-12-08 20:01:58.955148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.358 [2024-12-08 20:01:59.125511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.358 [2024-12-08 20:01:59.233798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.617 [2024-12-08 20:01:59.432908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.617 [2024-12-08 20:01:59.433034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.876 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.876 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:27.876 20:01:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:27.876 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.876 20:01:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.442 malloc0 00:06:28.442 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.442 20:02:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:28.442 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.442 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.442 [2024-12-08 20:02:00.285229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:28.442 [2024-12-08 20:02:00.285349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.442 [2024-12-08 20:02:00.285402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:28.442 [2024-12-08 20:02:00.285469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.442 [2024-12-08 20:02:00.287691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.442 [2024-12-08 20:02:00.287770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:28.442 pt0 00:06:28.442 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.442 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.443 eade4877-a670-43ab-b37d-cd1473ce315d 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.443 20:02:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.443 0e31af02-517c-47d2-9192-77d354599132 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.443 c866fbec-81e0-4eab-9d66-b4680ff225c2 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.443 [2024-12-08 20:02:00.407955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e31af02-517c-47d2-9192-77d354599132 is claimed 00:06:28.443 [2024-12-08 20:02:00.408051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c866fbec-81e0-4eab-9d66-b4680ff225c2 is claimed 00:06:28.443 [2024-12-08 20:02:00.408190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:28.443 [2024-12-08 20:02:00.408205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:28.443 [2024-12-08 20:02:00.408452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:28.443 [2024-12-08 20:02:00.408651] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:28.443 [2024-12-08 20:02:00.408662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:28.443 [2024-12-08 20:02:00.408808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:28.443 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 [2024-12-08 20:02:00.500023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 [2024-12-08 20:02:00.543875] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.702 [2024-12-08 20:02:00.543901] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0e31af02-517c-47d2-9192-77d354599132' was resized: old size 131072, new size 204800 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:28.702 20:02:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 [2024-12-08 20:02:00.551798] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.702 [2024-12-08 20:02:00.551822] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c866fbec-81e0-4eab-9d66-b4680ff225c2' was resized: old size 131072, new size 204800 00:06:28.702 [2024-12-08 20:02:00.551854] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.702 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.702 [2024-12-08 20:02:00.659722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 [2024-12-08 20:02:00.703473] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:28.962 [2024-12-08 20:02:00.703550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:28.962 [2024-12-08 20:02:00.703585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:28.962 [2024-12-08 20:02:00.703765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:28.962 [2024-12-08 20:02:00.704005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.962 [2024-12-08 20:02:00.704074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.962 [2024-12-08 20:02:00.704087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 [2024-12-08 20:02:00.711393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:28.962 [2024-12-08 20:02:00.711444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.962 [2024-12-08 20:02:00.711469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:28.962 [2024-12-08 20:02:00.711488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.962 [2024-12-08 20:02:00.713648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.962 [2024-12-08 20:02:00.713689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:28.962 pt0 00:06:28.962 [2024-12-08 20:02:00.715354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
0e31af02-517c-47d2-9192-77d354599132 00:06:28.962 [2024-12-08 20:02:00.715428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e31af02-517c-47d2-9192-77d354599132 is claimed 00:06:28.962 [2024-12-08 20:02:00.715533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c866fbec-81e0-4eab-9d66-b4680ff225c2 00:06:28.962 [2024-12-08 20:02:00.715551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c866fbec-81e0-4eab-9d66-b4680ff225c2 is claimed 00:06:28.962 [2024-12-08 20:02:00.715700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c866fbec-81e0-4eab-9d66-b4680ff225c2 (2) smaller than existing raid bdev Raid (3) 00:06:28.962 [2024-12-08 20:02:00.715723] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0e31af02-517c-47d2-9192-77d354599132: File exists 00:06:28.962 [2024-12-08 20:02:00.715762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:28.962 [2024-12-08 20:02:00.715778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:28.962 [2024-12-08 20:02:00.716077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:28.962 [2024-12-08 20:02:00.716329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:28.962 [2024-12-08 20:02:00.716344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 
[2024-12-08 20:02:00.716530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.962 [2024-12-08 20:02:00.731662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60080 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60080 ']' 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60080 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60080 00:06:28.962 killing process with pid 60080 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60080' 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60080 00:06:28.962 20:02:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60080 00:06:28.962 [2024-12-08 20:02:00.815762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:28.962 [2024-12-08 20:02:00.815837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.962 [2024-12-08 20:02:00.815911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.962 [2024-12-08 20:02:00.815921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:30.341 [2024-12-08 20:02:02.209486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:31.722 20:02:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:31.722 00:06:31.722 real 0m4.436s 00:06:31.722 user 0m4.598s 00:06:31.722 sys 0m0.561s 00:06:31.722 20:02:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.722 ************************************ 00:06:31.722 END TEST raid1_resize_superblock_test 00:06:31.722 ************************************ 00:06:31.722 20:02:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 
20:02:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:31.722 20:02:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:31.722 20:02:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:31.722 20:02:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:31.722 20:02:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:31.722 20:02:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:31.722 20:02:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.722 20:02:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.722 20:02:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 ************************************ 00:06:31.722 START TEST raid_function_test_raid0 00:06:31.722 ************************************ 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60181 00:06:31.722 Process raid pid: 60181 00:06:31.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60181' 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60181 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60181 ']' 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.722 20:02:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 [2024-12-08 20:02:03.490560] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:31.722 [2024-12-08 20:02:03.490821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.722 [2024-12-08 20:02:03.673752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.982 [2024-12-08 20:02:03.787253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.242 [2024-12-08 20:02:03.986123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.242 [2024-12-08 20:02:03.986161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.501 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.501 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:32.501 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 Base_1 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 Base_2 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 [2024-12-08 20:02:04.378139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:32.502 [2024-12-08 20:02:04.379931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:32.502 [2024-12-08 20:02:04.380018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.502 [2024-12-08 20:02:04.380032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:32.502 [2024-12-08 20:02:04.380285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:32.502 [2024-12-08 20:02:04.380443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.502 [2024-12-08 20:02:04.380452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:32.502 [2024-12-08 20:02:04.380605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:32.502 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:32.762 [2024-12-08 20:02:04.617794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:32.762 /dev/nbd0 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.762 
20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.762 1+0 records in 00:06:32.762 1+0 records out 00:06:32.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277708 s, 14.7 MB/s 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:32.762 20:02:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.032 { 00:06:33.032 "nbd_device": "/dev/nbd0", 00:06:33.032 "bdev_name": "raid" 00:06:33.032 } 00:06:33.032 ]' 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.032 { 00:06:33.032 "nbd_device": "/dev/nbd0", 00:06:33.032 "bdev_name": "raid" 00:06:33.032 } 00:06:33.032 ]' 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:33.032 20:02:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:33.313 4096+0 records in 00:06:33.313 4096+0 records out 00:06:33.313 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0350187 s, 59.9 MB/s 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:33.313 4096+0 records in 00:06:33.313 4096+0 records out 00:06:33.313 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.19391 s, 10.8 MB/s 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:33.313 128+0 records in 00:06:33.313 128+0 records out 00:06:33.313 65536 bytes (66 kB, 64 KiB) copied, 0.00148867 s, 44.0 MB/s 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:33.313 2035+0 records in 00:06:33.313 2035+0 records out 00:06:33.313 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0174203 s, 59.8 MB/s 00:06:33.313 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.573 20:02:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:33.573 456+0 records in 00:06:33.573 456+0 records out 00:06:33.573 233472 bytes (233 kB, 228 KiB) copied, 0.00410294 s, 56.9 MB/s 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.573 20:02:05 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.573 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:33.832 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.832 [2024-12-08 20:02:05.563513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.832 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.833 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60181 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60181 ']' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60181 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60181 00:06:34.092 killing process with pid 60181 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60181' 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60181 
00:06:34.092 [2024-12-08 20:02:05.879399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.092 [2024-12-08 20:02:05.879506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.092 20:02:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60181 00:06:34.092 [2024-12-08 20:02:05.879554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.092 [2024-12-08 20:02:05.879570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:34.352 [2024-12-08 20:02:06.080167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.292 20:02:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:35.292 00:06:35.292 real 0m3.792s 00:06:35.292 user 0m4.347s 00:06:35.292 sys 0m0.996s 00:06:35.292 ************************************ 00:06:35.292 END TEST raid_function_test_raid0 00:06:35.292 ************************************ 00:06:35.292 20:02:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.292 20:02:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:35.292 20:02:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:35.292 20:02:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.292 20:02:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.292 20:02:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.292 ************************************ 00:06:35.292 START TEST raid_function_test_concat 00:06:35.292 ************************************ 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:35.292 Process raid pid: 60306 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60306 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60306' 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60306 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60306 ']' 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.292 20:02:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:35.552 [2024-12-08 20:02:07.340251] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:35.552 [2024-12-08 20:02:07.340401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.552 [2024-12-08 20:02:07.514426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.812 [2024-12-08 20:02:07.627460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.072 [2024-12-08 20:02:07.828667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.072 [2024-12-08 20:02:07.828779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.331 Base_1 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.331 Base_2 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.331 [2024-12-08 20:02:08.237275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:36.331 [2024-12-08 20:02:08.239080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:36.331 [2024-12-08 20:02:08.239210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:36.331 [2024-12-08 20:02:08.239287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:36.331 [2024-12-08 20:02:08.239591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:36.331 [2024-12-08 20:02:08.239800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:36.331 [2024-12-08 20:02:08.239843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:36.331 [2024-12-08 20:02:08.240069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.331 20:02:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.331 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:36.332 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.332 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.332 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.332 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.332 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:36.591 [2024-12-08 20:02:08.468977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:36.591 /dev/nbd0 00:06:36.591 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.591 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.591 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.592 1+0 records in 00:06:36.592 1+0 records out 00:06:36.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482062 s, 8.5 MB/s 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:36.592 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.852 { 00:06:36.852 "nbd_device": "/dev/nbd0", 00:06:36.852 "bdev_name": "raid" 00:06:36.852 } 00:06:36.852 ]' 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.852 { 00:06:36.852 "nbd_device": "/dev/nbd0", 00:06:36.852 "bdev_name": "raid" 00:06:36.852 } 00:06:36.852 ]' 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:36.852 20:02:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:36.852 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:37.112 4096+0 records in 00:06:37.112 4096+0 records out 00:06:37.112 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0230015 s, 91.2 MB/s 00:06:37.112 20:02:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:37.112 4096+0 records in 00:06:37.112 4096+0 records out 00:06:37.112 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.185022 s, 11.3 MB/s 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:37.112 128+0 records in 00:06:37.112 128+0 records out 00:06:37.112 65536 bytes (66 kB, 64 KiB) copied, 0.0015023 s, 43.6 MB/s 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.112 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:37.371 2035+0 records in 00:06:37.371 2035+0 records out 00:06:37.371 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148592 s, 70.1 MB/s 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:37.371 20:02:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:37.371 456+0 records in 00:06:37.371 456+0 records out 00:06:37.371 233472 bytes (233 kB, 228 KiB) copied, 0.00437158 s, 53.4 MB/s 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.371 
20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.371 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.631 [2024-12-08 20:02:09.411218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.631 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.891 20:02:09 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60306 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60306 ']' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60306 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60306 00:06:37.891 killing process with pid 60306 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60306' 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60306 00:06:37.891 [2024-12-08 20:02:09.716735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.891 [2024-12-08 20:02:09.716862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.891 20:02:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60306 00:06:37.891 [2024-12-08 20:02:09.716921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.891 [2024-12-08 20:02:09.716934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:38.151 [2024-12-08 20:02:09.923884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.089 ************************************ 00:06:39.089 END TEST raid_function_test_concat 00:06:39.089 ************************************ 00:06:39.089 20:02:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:39.089 00:06:39.089 real 0m3.754s 00:06:39.089 user 0m4.293s 00:06:39.089 sys 0m0.999s 00:06:39.089 20:02:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.089 20:02:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:39.089 20:02:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:39.089 20:02:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.089 20:02:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.089 20:02:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.089 ************************************ 00:06:39.089 START TEST raid0_resize_test 00:06:39.089 ************************************ 00:06:39.089 20:02:11 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:39.089 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:39.348 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:39.348 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:39.348 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60434 00:06:39.348 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:39.348 Process raid pid: 60434 00:06:39.348 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60434' 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60434 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60434 ']' 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.349 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.349 [2024-12-08 20:02:11.151306] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:39.349 [2024-12-08 20:02:11.151430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.611 [2024-12-08 20:02:11.325962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.611 [2024-12-08 20:02:11.436218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.870 [2024-12-08 20:02:11.632447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.870 [2024-12-08 20:02:11.632494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.128 Base_1 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:40.128 
20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.128 Base_2 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.128 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:40.129 20:02:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:40.129 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.129 20:02:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 [2024-12-08 20:02:12.000106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:40.129 [2024-12-08 20:02:12.001862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:40.129 [2024-12-08 20:02:12.001923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:40.129 [2024-12-08 20:02:12.001934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:40.129 [2024-12-08 20:02:12.002187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:40.129 [2024-12-08 20:02:12.002306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:40.129 [2024-12-08 20:02:12.002314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:40.129 [2024-12-08 20:02:12.002452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:40.129 
20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 [2024-12-08 20:02:12.012071] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.129 [2024-12-08 20:02:12.012140] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:40.129 true 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 [2024-12-08 20:02:12.028258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:06:40.129 [2024-12-08 20:02:12.071996] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.129 [2024-12-08 20:02:12.072065] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:40.129 [2024-12-08 20:02:12.072127] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:40.129 true 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:40.129 [2024-12-08 20:02:12.084145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.129 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60434 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60434 ']' 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60434 
00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60434 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60434' 00:06:40.388 killing process with pid 60434 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60434 00:06:40.388 [2024-12-08 20:02:12.161997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.388 [2024-12-08 20:02:12.162144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.388 20:02:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60434 00:06:40.388 [2024-12-08 20:02:12.162235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.388 [2024-12-08 20:02:12.162247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:40.388 [2024-12-08 20:02:12.179165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.325 20:02:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:41.325 00:06:41.325 real 0m2.199s 00:06:41.325 user 0m2.306s 00:06:41.325 sys 0m0.343s 00:06:41.325 20:02:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.325 20:02:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.325 ************************************ 00:06:41.325 END TEST 
raid0_resize_test 00:06:41.325 ************************************ 00:06:41.585 20:02:13 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:41.585 20:02:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.585 20:02:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.585 20:02:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.585 ************************************ 00:06:41.585 START TEST raid1_resize_test 00:06:41.585 ************************************ 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60490 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60490' 00:06:41.585 Process raid pid: 60490 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60490 00:06:41.585 20:02:13 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60490 ']' 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.585 20:02:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.585 [2024-12-08 20:02:13.409914] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:41.585 [2024-12-08 20:02:13.410150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.844 [2024-12-08 20:02:13.581755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.844 [2024-12-08 20:02:13.693455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.102 [2024-12-08 20:02:13.891700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.102 [2024-12-08 20:02:13.891834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:42.360 20:02:14 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.360 Base_1 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.360 Base_2 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.360 [2024-12-08 20:02:14.310357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:42.360 [2024-12-08 20:02:14.312188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:42.360 [2024-12-08 20:02:14.312255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.360 [2024-12-08 20:02:14.312268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:42.360 [2024-12-08 20:02:14.312499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:42.360 [2024-12-08 20:02:14.312617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.360 [2024-12-08 20:02:14.312625] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.360 [2024-12-08 20:02:14.312770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.360 [2024-12-08 20:02:14.322351] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.360 [2024-12-08 20:02:14.322421] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:42.360 true 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.360 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.360 [2024-12-08 20:02:14.334456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:42.619 20:02:14 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.619 [2024-12-08 20:02:14.374214] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.619 [2024-12-08 20:02:14.374236] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:42.619 [2024-12-08 20:02:14.374263] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:42.619 true 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.619 [2024-12-08 20:02:14.390341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:42.619 20:02:14 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60490 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60490 ']' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60490 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60490 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60490' 00:06:42.619 killing process with pid 60490 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60490 00:06:42.619 [2024-12-08 20:02:14.458508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.619 [2024-12-08 20:02:14.458663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.619 20:02:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60490 00:06:42.619 [2024-12-08 20:02:14.459205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.619 [2024-12-08 20:02:14.459289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:42.619 [2024-12-08 20:02:14.475871] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:44.029 20:02:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:44.029 00:06:44.029 real 0m2.227s 00:06:44.029 user 0m2.371s 00:06:44.029 sys 0m0.342s 00:06:44.029 20:02:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.029 20:02:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.029 ************************************ 00:06:44.029 END TEST raid1_resize_test 00:06:44.029 ************************************ 00:06:44.029 20:02:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:44.029 20:02:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:44.029 20:02:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:44.029 20:02:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:44.029 20:02:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.029 20:02:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.029 ************************************ 00:06:44.029 START TEST raid_state_function_test 00:06:44.029 ************************************ 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:44.029 Process raid pid: 60547 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=60547 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60547' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60547 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60547 ']' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.029 20:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.030 [2024-12-08 20:02:15.729515] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:44.030 [2024-12-08 20:02:15.729758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.030 [2024-12-08 20:02:15.911249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.289 [2024-12-08 20:02:16.019437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.289 [2024-12-08 20:02:16.220558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.289 [2024-12-08 20:02:16.220588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.857 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.857 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:44.857 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:44.857 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.857 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.857 [2024-12-08 20:02:16.533462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.857 [2024-12-08 20:02:16.533521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:44.857 [2024-12-08 20:02:16.533532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.857 [2024-12-08 20:02:16.533542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.858 20:02:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.858 "name": "Existed_Raid", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "strip_size_kb": 64, 00:06:44.858 "state": "configuring", 00:06:44.858 
"raid_level": "raid0", 00:06:44.858 "superblock": false, 00:06:44.858 "num_base_bdevs": 2, 00:06:44.858 "num_base_bdevs_discovered": 0, 00:06:44.858 "num_base_bdevs_operational": 2, 00:06:44.858 "base_bdevs_list": [ 00:06:44.858 { 00:06:44.858 "name": "BaseBdev1", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "is_configured": false, 00:06:44.858 "data_offset": 0, 00:06:44.858 "data_size": 0 00:06:44.858 }, 00:06:44.858 { 00:06:44.858 "name": "BaseBdev2", 00:06:44.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.858 "is_configured": false, 00:06:44.858 "data_offset": 0, 00:06:44.858 "data_size": 0 00:06:44.858 } 00:06:44.858 ] 00:06:44.858 }' 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.858 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [2024-12-08 20:02:16.948814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.118 [2024-12-08 20:02:16.948997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:45.118 [2024-12-08 20:02:16.960708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:45.118 [2024-12-08 20:02:16.960815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:45.118 [2024-12-08 20:02:16.960849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.118 [2024-12-08 20:02:16.960879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.118 20:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [2024-12-08 20:02:17.009179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.118 BaseBdev1 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [ 00:06:45.118 { 00:06:45.118 "name": "BaseBdev1", 00:06:45.118 "aliases": [ 00:06:45.118 "e0f3379a-ae0a-45a2-9437-84433428ba5e" 00:06:45.118 ], 00:06:45.118 "product_name": "Malloc disk", 00:06:45.118 "block_size": 512, 00:06:45.118 "num_blocks": 65536, 00:06:45.118 "uuid": "e0f3379a-ae0a-45a2-9437-84433428ba5e", 00:06:45.118 "assigned_rate_limits": { 00:06:45.118 "rw_ios_per_sec": 0, 00:06:45.118 "rw_mbytes_per_sec": 0, 00:06:45.118 "r_mbytes_per_sec": 0, 00:06:45.118 "w_mbytes_per_sec": 0 00:06:45.118 }, 00:06:45.118 "claimed": true, 00:06:45.118 "claim_type": "exclusive_write", 00:06:45.118 "zoned": false, 00:06:45.118 "supported_io_types": { 00:06:45.118 "read": true, 00:06:45.118 "write": true, 00:06:45.118 "unmap": true, 00:06:45.118 "flush": true, 00:06:45.118 "reset": true, 00:06:45.118 "nvme_admin": false, 00:06:45.118 "nvme_io": false, 00:06:45.118 "nvme_io_md": false, 00:06:45.118 "write_zeroes": true, 00:06:45.118 "zcopy": true, 00:06:45.118 "get_zone_info": false, 00:06:45.118 "zone_management": false, 00:06:45.118 "zone_append": false, 00:06:45.118 "compare": false, 00:06:45.118 "compare_and_write": false, 00:06:45.118 "abort": true, 00:06:45.118 "seek_hole": false, 00:06:45.118 "seek_data": false, 00:06:45.118 "copy": true, 00:06:45.118 "nvme_iov_md": 
false 00:06:45.118 }, 00:06:45.118 "memory_domains": [ 00:06:45.118 { 00:06:45.118 "dma_device_id": "system", 00:06:45.118 "dma_device_type": 1 00:06:45.118 }, 00:06:45.118 { 00:06:45.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.118 "dma_device_type": 2 00:06:45.118 } 00:06:45.118 ], 00:06:45.118 "driver_specific": {} 00:06:45.118 } 00:06:45.118 ] 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.118 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.119 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.119 
20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.119 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.119 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.378 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.378 "name": "Existed_Raid", 00:06:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.378 "strip_size_kb": 64, 00:06:45.378 "state": "configuring", 00:06:45.378 "raid_level": "raid0", 00:06:45.378 "superblock": false, 00:06:45.378 "num_base_bdevs": 2, 00:06:45.378 "num_base_bdevs_discovered": 1, 00:06:45.378 "num_base_bdevs_operational": 2, 00:06:45.378 "base_bdevs_list": [ 00:06:45.378 { 00:06:45.378 "name": "BaseBdev1", 00:06:45.378 "uuid": "e0f3379a-ae0a-45a2-9437-84433428ba5e", 00:06:45.378 "is_configured": true, 00:06:45.378 "data_offset": 0, 00:06:45.378 "data_size": 65536 00:06:45.378 }, 00:06:45.378 { 00:06:45.378 "name": "BaseBdev2", 00:06:45.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.378 "is_configured": false, 00:06:45.378 "data_offset": 0, 00:06:45.378 "data_size": 0 00:06:45.378 } 00:06:45.378 ] 00:06:45.378 }' 00:06:45.378 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.378 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.638 [2024-12-08 20:02:17.456620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.638 [2024-12-08 20:02:17.456713] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.638 [2024-12-08 20:02:17.468632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.638 [2024-12-08 20:02:17.470851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.638 [2024-12-08 20:02:17.470902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.638 "name": "Existed_Raid", 00:06:45.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.638 "strip_size_kb": 64, 00:06:45.638 "state": "configuring", 00:06:45.638 "raid_level": "raid0", 00:06:45.638 "superblock": false, 00:06:45.638 "num_base_bdevs": 2, 00:06:45.638 "num_base_bdevs_discovered": 1, 00:06:45.638 "num_base_bdevs_operational": 2, 00:06:45.638 "base_bdevs_list": [ 00:06:45.638 { 00:06:45.638 "name": "BaseBdev1", 00:06:45.638 "uuid": "e0f3379a-ae0a-45a2-9437-84433428ba5e", 00:06:45.638 "is_configured": true, 00:06:45.638 "data_offset": 0, 00:06:45.638 "data_size": 65536 00:06:45.638 }, 00:06:45.638 { 00:06:45.638 "name": "BaseBdev2", 00:06:45.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.638 "is_configured": false, 00:06:45.638 "data_offset": 0, 00:06:45.638 "data_size": 0 00:06:45.638 } 00:06:45.638 
] 00:06:45.638 }' 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.638 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.899 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.159 [2024-12-08 20:02:17.925700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:46.159 [2024-12-08 20:02:17.925868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:46.159 [2024-12-08 20:02:17.925886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.159 [2024-12-08 20:02:17.926358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.159 [2024-12-08 20:02:17.926584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:46.159 [2024-12-08 20:02:17.926600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:46.159 [2024-12-08 20:02:17.926910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.159 BaseBdev2 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:46.159 20:02:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.159 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.159 [ 00:06:46.159 { 00:06:46.159 "name": "BaseBdev2", 00:06:46.159 "aliases": [ 00:06:46.159 "91ac7f67-c32d-452e-99f8-cb8531c95409" 00:06:46.159 ], 00:06:46.159 "product_name": "Malloc disk", 00:06:46.159 "block_size": 512, 00:06:46.159 "num_blocks": 65536, 00:06:46.159 "uuid": "91ac7f67-c32d-452e-99f8-cb8531c95409", 00:06:46.159 "assigned_rate_limits": { 00:06:46.159 "rw_ios_per_sec": 0, 00:06:46.159 "rw_mbytes_per_sec": 0, 00:06:46.159 "r_mbytes_per_sec": 0, 00:06:46.159 "w_mbytes_per_sec": 0 00:06:46.159 }, 00:06:46.159 "claimed": true, 00:06:46.159 "claim_type": "exclusive_write", 00:06:46.159 "zoned": false, 00:06:46.160 "supported_io_types": { 00:06:46.160 "read": true, 00:06:46.160 "write": true, 00:06:46.160 "unmap": true, 00:06:46.160 "flush": true, 00:06:46.160 "reset": true, 00:06:46.160 "nvme_admin": false, 00:06:46.160 "nvme_io": false, 00:06:46.160 "nvme_io_md": 
false, 00:06:46.160 "write_zeroes": true, 00:06:46.160 "zcopy": true, 00:06:46.160 "get_zone_info": false, 00:06:46.160 "zone_management": false, 00:06:46.160 "zone_append": false, 00:06:46.160 "compare": false, 00:06:46.160 "compare_and_write": false, 00:06:46.160 "abort": true, 00:06:46.160 "seek_hole": false, 00:06:46.160 "seek_data": false, 00:06:46.160 "copy": true, 00:06:46.160 "nvme_iov_md": false 00:06:46.160 }, 00:06:46.160 "memory_domains": [ 00:06:46.160 { 00:06:46.160 "dma_device_id": "system", 00:06:46.160 "dma_device_type": 1 00:06:46.160 }, 00:06:46.160 { 00:06:46.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.160 "dma_device_type": 2 00:06:46.160 } 00:06:46.160 ], 00:06:46.160 "driver_specific": {} 00:06:46.160 } 00:06:46.160 ] 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.160 20:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.160 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.160 "name": "Existed_Raid", 00:06:46.160 "uuid": "79a9e7cc-a81c-4b6a-be2c-99abdd762515", 00:06:46.160 "strip_size_kb": 64, 00:06:46.160 "state": "online", 00:06:46.160 "raid_level": "raid0", 00:06:46.160 "superblock": false, 00:06:46.160 "num_base_bdevs": 2, 00:06:46.160 "num_base_bdevs_discovered": 2, 00:06:46.160 "num_base_bdevs_operational": 2, 00:06:46.160 "base_bdevs_list": [ 00:06:46.160 { 00:06:46.160 "name": "BaseBdev1", 00:06:46.160 "uuid": "e0f3379a-ae0a-45a2-9437-84433428ba5e", 00:06:46.160 "is_configured": true, 00:06:46.160 "data_offset": 0, 00:06:46.160 "data_size": 65536 00:06:46.160 }, 00:06:46.160 { 00:06:46.160 "name": "BaseBdev2", 00:06:46.160 "uuid": "91ac7f67-c32d-452e-99f8-cb8531c95409", 00:06:46.160 "is_configured": true, 00:06:46.160 "data_offset": 0, 00:06:46.160 "data_size": 65536 00:06:46.160 } 00:06:46.160 ] 00:06:46.160 }' 00:06:46.160 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:46.160 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.420 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:46.420 [2024-12-08 20:02:18.393300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.680 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.680 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:46.680 "name": "Existed_Raid", 00:06:46.680 "aliases": [ 00:06:46.680 "79a9e7cc-a81c-4b6a-be2c-99abdd762515" 00:06:46.680 ], 00:06:46.680 "product_name": "Raid Volume", 00:06:46.680 "block_size": 512, 00:06:46.680 "num_blocks": 131072, 00:06:46.680 "uuid": "79a9e7cc-a81c-4b6a-be2c-99abdd762515", 00:06:46.680 "assigned_rate_limits": { 00:06:46.680 "rw_ios_per_sec": 0, 00:06:46.680 "rw_mbytes_per_sec": 0, 00:06:46.680 "r_mbytes_per_sec": 
0, 00:06:46.680 "w_mbytes_per_sec": 0 00:06:46.680 }, 00:06:46.680 "claimed": false, 00:06:46.680 "zoned": false, 00:06:46.680 "supported_io_types": { 00:06:46.680 "read": true, 00:06:46.680 "write": true, 00:06:46.680 "unmap": true, 00:06:46.680 "flush": true, 00:06:46.680 "reset": true, 00:06:46.680 "nvme_admin": false, 00:06:46.680 "nvme_io": false, 00:06:46.680 "nvme_io_md": false, 00:06:46.680 "write_zeroes": true, 00:06:46.680 "zcopy": false, 00:06:46.680 "get_zone_info": false, 00:06:46.680 "zone_management": false, 00:06:46.680 "zone_append": false, 00:06:46.680 "compare": false, 00:06:46.680 "compare_and_write": false, 00:06:46.680 "abort": false, 00:06:46.680 "seek_hole": false, 00:06:46.680 "seek_data": false, 00:06:46.680 "copy": false, 00:06:46.680 "nvme_iov_md": false 00:06:46.680 }, 00:06:46.680 "memory_domains": [ 00:06:46.680 { 00:06:46.680 "dma_device_id": "system", 00:06:46.680 "dma_device_type": 1 00:06:46.680 }, 00:06:46.680 { 00:06:46.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.680 "dma_device_type": 2 00:06:46.680 }, 00:06:46.680 { 00:06:46.680 "dma_device_id": "system", 00:06:46.680 "dma_device_type": 1 00:06:46.680 }, 00:06:46.680 { 00:06:46.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.680 "dma_device_type": 2 00:06:46.680 } 00:06:46.680 ], 00:06:46.680 "driver_specific": { 00:06:46.680 "raid": { 00:06:46.680 "uuid": "79a9e7cc-a81c-4b6a-be2c-99abdd762515", 00:06:46.680 "strip_size_kb": 64, 00:06:46.680 "state": "online", 00:06:46.680 "raid_level": "raid0", 00:06:46.680 "superblock": false, 00:06:46.680 "num_base_bdevs": 2, 00:06:46.680 "num_base_bdevs_discovered": 2, 00:06:46.680 "num_base_bdevs_operational": 2, 00:06:46.680 "base_bdevs_list": [ 00:06:46.680 { 00:06:46.680 "name": "BaseBdev1", 00:06:46.680 "uuid": "e0f3379a-ae0a-45a2-9437-84433428ba5e", 00:06:46.680 "is_configured": true, 00:06:46.680 "data_offset": 0, 00:06:46.681 "data_size": 65536 00:06:46.681 }, 00:06:46.681 { 00:06:46.681 "name": "BaseBdev2", 
00:06:46.681 "uuid": "91ac7f67-c32d-452e-99f8-cb8531c95409", 00:06:46.681 "is_configured": true, 00:06:46.681 "data_offset": 0, 00:06:46.681 "data_size": 65536 00:06:46.681 } 00:06:46.681 ] 00:06:46.681 } 00:06:46.681 } 00:06:46.681 }' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:46.681 BaseBdev2' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.681 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.681 [2024-12-08 20:02:18.592758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:46.681 [2024-12-08 20:02:18.592823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.681 [2024-12-08 20:02:18.592893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.941 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.942 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.942 "name": "Existed_Raid", 00:06:46.942 "uuid": "79a9e7cc-a81c-4b6a-be2c-99abdd762515", 00:06:46.942 "strip_size_kb": 64, 00:06:46.942 
"state": "offline", 00:06:46.942 "raid_level": "raid0", 00:06:46.942 "superblock": false, 00:06:46.942 "num_base_bdevs": 2, 00:06:46.942 "num_base_bdevs_discovered": 1, 00:06:46.942 "num_base_bdevs_operational": 1, 00:06:46.942 "base_bdevs_list": [ 00:06:46.942 { 00:06:46.942 "name": null, 00:06:46.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.942 "is_configured": false, 00:06:46.942 "data_offset": 0, 00:06:46.942 "data_size": 65536 00:06:46.942 }, 00:06:46.942 { 00:06:46.942 "name": "BaseBdev2", 00:06:46.942 "uuid": "91ac7f67-c32d-452e-99f8-cb8531c95409", 00:06:46.942 "is_configured": true, 00:06:46.942 "data_offset": 0, 00:06:46.942 "data_size": 65536 00:06:46.942 } 00:06:46.942 ] 00:06:46.942 }' 00:06:46.942 20:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.942 20:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.202 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 [2024-12-08 20:02:19.151265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:47.202 [2024-12-08 20:02:19.151437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60547 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60547 ']' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60547 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60547 00:06:47.462 killing process with pid 60547 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60547' 00:06:47.462 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60547 00:06:47.463 [2024-12-08 20:02:19.350383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.463 20:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60547 00:06:47.463 [2024-12-08 20:02:19.368460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:48.844 00:06:48.844 real 0m4.960s 00:06:48.844 user 0m6.998s 00:06:48.844 sys 0m0.785s 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.844 ************************************ 00:06:48.844 END TEST raid_state_function_test 00:06:48.844 ************************************ 00:06:48.844 20:02:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:48.844 20:02:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:48.844 20:02:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.844 20:02:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.844 ************************************ 00:06:48.844 START TEST raid_state_function_test_sb 00:06:48.844 ************************************ 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60800 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60800' 00:06:48.844 Process raid pid: 60800 00:06:48.844 20:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60800 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60800 ']' 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.845 20:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.845 [2024-12-08 20:02:20.737042] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:48.845 [2024-12-08 20:02:20.737248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.105 [2024-12-08 20:02:20.912883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.105 [2024-12-08 20:02:21.052040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.365 [2024-12-08 20:02:21.280808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.365 [2024-12-08 20:02:21.280929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.625 [2024-12-08 20:02:21.554269] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:49.625 [2024-12-08 20:02:21.554416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:49.625 [2024-12-08 20:02:21.554451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.625 [2024-12-08 20:02:21.554479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:49.625 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.885 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.885 "name": "Existed_Raid", 00:06:49.885 "uuid": "8f42842d-e761-4b33-8775-cbe06670a7e6", 00:06:49.885 "strip_size_kb": 64, 00:06:49.885 "state": "configuring", 00:06:49.885 "raid_level": "raid0", 00:06:49.885 "superblock": true, 00:06:49.885 "num_base_bdevs": 2, 00:06:49.885 "num_base_bdevs_discovered": 0, 00:06:49.885 "num_base_bdevs_operational": 2, 00:06:49.885 "base_bdevs_list": [ 00:06:49.885 { 00:06:49.885 "name": "BaseBdev1", 00:06:49.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.885 "is_configured": false, 00:06:49.885 "data_offset": 0, 00:06:49.885 "data_size": 0 00:06:49.885 }, 00:06:49.885 { 00:06:49.885 "name": "BaseBdev2", 00:06:49.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.885 "is_configured": false, 00:06:49.885 "data_offset": 0, 00:06:49.885 "data_size": 0 00:06:49.885 } 00:06:49.885 ] 00:06:49.885 }' 00:06:49.885 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.885 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 [2024-12-08 20:02:21.989602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:50.145 [2024-12-08 20:02:21.989760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 [2024-12-08 20:02:22.001508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.145 [2024-12-08 20:02:22.001623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.145 [2024-12-08 20:02:22.001657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.145 [2024-12-08 20:02:22.001691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 [2024-12-08 20:02:22.054055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.145 BaseBdev1 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 [ 00:06:50.145 { 00:06:50.145 "name": "BaseBdev1", 00:06:50.145 "aliases": [ 00:06:50.145 "db9fd320-3fbd-4cb5-a3fb-631e80920cf9" 00:06:50.145 ], 00:06:50.145 "product_name": "Malloc disk", 00:06:50.145 "block_size": 512, 00:06:50.145 "num_blocks": 65536, 00:06:50.145 "uuid": "db9fd320-3fbd-4cb5-a3fb-631e80920cf9", 00:06:50.145 "assigned_rate_limits": { 00:06:50.145 "rw_ios_per_sec": 0, 00:06:50.145 "rw_mbytes_per_sec": 0, 00:06:50.145 "r_mbytes_per_sec": 0, 00:06:50.145 "w_mbytes_per_sec": 0 00:06:50.145 }, 00:06:50.145 "claimed": true, 
00:06:50.145 "claim_type": "exclusive_write", 00:06:50.145 "zoned": false, 00:06:50.145 "supported_io_types": { 00:06:50.145 "read": true, 00:06:50.145 "write": true, 00:06:50.145 "unmap": true, 00:06:50.145 "flush": true, 00:06:50.145 "reset": true, 00:06:50.145 "nvme_admin": false, 00:06:50.145 "nvme_io": false, 00:06:50.145 "nvme_io_md": false, 00:06:50.145 "write_zeroes": true, 00:06:50.145 "zcopy": true, 00:06:50.145 "get_zone_info": false, 00:06:50.145 "zone_management": false, 00:06:50.145 "zone_append": false, 00:06:50.145 "compare": false, 00:06:50.145 "compare_and_write": false, 00:06:50.145 "abort": true, 00:06:50.145 "seek_hole": false, 00:06:50.145 "seek_data": false, 00:06:50.145 "copy": true, 00:06:50.145 "nvme_iov_md": false 00:06:50.145 }, 00:06:50.145 "memory_domains": [ 00:06:50.145 { 00:06:50.145 "dma_device_id": "system", 00:06:50.145 "dma_device_type": 1 00:06:50.145 }, 00:06:50.145 { 00:06:50.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.145 "dma_device_type": 2 00:06:50.145 } 00:06:50.145 ], 00:06:50.145 "driver_specific": {} 00:06:50.145 } 00:06:50.145 ] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.145 20:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.405 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.405 "name": "Existed_Raid", 00:06:50.405 "uuid": "6c699fff-4fdb-4a3b-abf1-de6ea23bde4e", 00:06:50.405 "strip_size_kb": 64, 00:06:50.405 "state": "configuring", 00:06:50.405 "raid_level": "raid0", 00:06:50.405 "superblock": true, 00:06:50.405 "num_base_bdevs": 2, 00:06:50.405 "num_base_bdevs_discovered": 1, 00:06:50.405 "num_base_bdevs_operational": 2, 00:06:50.405 "base_bdevs_list": [ 00:06:50.405 { 00:06:50.405 "name": "BaseBdev1", 00:06:50.405 "uuid": "db9fd320-3fbd-4cb5-a3fb-631e80920cf9", 00:06:50.405 "is_configured": true, 00:06:50.405 "data_offset": 2048, 00:06:50.405 "data_size": 63488 00:06:50.405 }, 00:06:50.405 { 00:06:50.405 "name": "BaseBdev2", 00:06:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.405 
"is_configured": false, 00:06:50.405 "data_offset": 0, 00:06:50.405 "data_size": 0 00:06:50.405 } 00:06:50.405 ] 00:06:50.405 }' 00:06:50.405 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.405 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.665 [2024-12-08 20:02:22.565254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.665 [2024-12-08 20:02:22.565432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.665 [2024-12-08 20:02:22.577318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.665 [2024-12-08 20:02:22.579656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.665 [2024-12-08 20:02:22.579812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.665 20:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.665 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.665 20:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.665 "name": "Existed_Raid", 00:06:50.665 "uuid": "2632ebd1-6356-4348-974c-30dc7a98c588", 00:06:50.665 "strip_size_kb": 64, 00:06:50.665 "state": "configuring", 00:06:50.665 "raid_level": "raid0", 00:06:50.665 "superblock": true, 00:06:50.665 "num_base_bdevs": 2, 00:06:50.665 "num_base_bdevs_discovered": 1, 00:06:50.665 "num_base_bdevs_operational": 2, 00:06:50.665 "base_bdevs_list": [ 00:06:50.665 { 00:06:50.665 "name": "BaseBdev1", 00:06:50.665 "uuid": "db9fd320-3fbd-4cb5-a3fb-631e80920cf9", 00:06:50.665 "is_configured": true, 00:06:50.665 "data_offset": 2048, 00:06:50.665 "data_size": 63488 00:06:50.665 }, 00:06:50.665 { 00:06:50.665 "name": "BaseBdev2", 00:06:50.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.666 "is_configured": false, 00:06:50.666 "data_offset": 0, 00:06:50.666 "data_size": 0 00:06:50.666 } 00:06:50.666 ] 00:06:50.666 }' 00:06:50.666 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.666 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.237 20:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.237 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.237 20:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.237 [2024-12-08 20:02:23.033317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.237 [2024-12-08 20:02:23.033853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:51.237 [2024-12-08 20:02:23.033917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:51.237 [2024-12-08 20:02:23.034332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:06:51.237 BaseBdev2 00:06:51.237 [2024-12-08 20:02:23.034628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:51.237 [2024-12-08 20:02:23.034725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:51.237 [2024-12-08 20:02:23.035030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.237 20:02:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.237 [ 00:06:51.237 { 00:06:51.237 "name": "BaseBdev2", 00:06:51.237 "aliases": [ 00:06:51.237 "5ed5ddbc-0e70-4056-838c-27d43981d3b5" 00:06:51.237 ], 00:06:51.237 "product_name": "Malloc disk", 00:06:51.237 "block_size": 512, 00:06:51.237 "num_blocks": 65536, 00:06:51.237 "uuid": "5ed5ddbc-0e70-4056-838c-27d43981d3b5", 00:06:51.237 "assigned_rate_limits": { 00:06:51.237 "rw_ios_per_sec": 0, 00:06:51.237 "rw_mbytes_per_sec": 0, 00:06:51.237 "r_mbytes_per_sec": 0, 00:06:51.237 "w_mbytes_per_sec": 0 00:06:51.237 }, 00:06:51.237 "claimed": true, 00:06:51.237 "claim_type": "exclusive_write", 00:06:51.237 "zoned": false, 00:06:51.237 "supported_io_types": { 00:06:51.237 "read": true, 00:06:51.237 "write": true, 00:06:51.237 "unmap": true, 00:06:51.237 "flush": true, 00:06:51.237 "reset": true, 00:06:51.237 "nvme_admin": false, 00:06:51.237 "nvme_io": false, 00:06:51.237 "nvme_io_md": false, 00:06:51.237 "write_zeroes": true, 00:06:51.237 "zcopy": true, 00:06:51.237 "get_zone_info": false, 00:06:51.237 "zone_management": false, 00:06:51.237 "zone_append": false, 00:06:51.237 "compare": false, 00:06:51.237 "compare_and_write": false, 00:06:51.237 "abort": true, 00:06:51.237 "seek_hole": false, 00:06:51.237 "seek_data": false, 00:06:51.237 "copy": true, 00:06:51.237 "nvme_iov_md": false 00:06:51.237 }, 00:06:51.237 "memory_domains": [ 00:06:51.237 { 00:06:51.237 "dma_device_id": "system", 00:06:51.237 "dma_device_type": 1 00:06:51.237 }, 00:06:51.237 { 00:06:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.237 "dma_device_type": 2 00:06:51.237 } 00:06:51.237 ], 00:06:51.237 "driver_specific": {} 00:06:51.237 } 00:06:51.237 ] 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:51.237 20:02:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.237 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.237 20:02:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.237 "name": "Existed_Raid", 00:06:51.237 "uuid": "2632ebd1-6356-4348-974c-30dc7a98c588", 00:06:51.237 "strip_size_kb": 64, 00:06:51.237 "state": "online", 00:06:51.237 "raid_level": "raid0", 00:06:51.237 "superblock": true, 00:06:51.237 "num_base_bdevs": 2, 00:06:51.237 "num_base_bdevs_discovered": 2, 00:06:51.237 "num_base_bdevs_operational": 2, 00:06:51.238 "base_bdevs_list": [ 00:06:51.238 { 00:06:51.238 "name": "BaseBdev1", 00:06:51.238 "uuid": "db9fd320-3fbd-4cb5-a3fb-631e80920cf9", 00:06:51.238 "is_configured": true, 00:06:51.238 "data_offset": 2048, 00:06:51.238 "data_size": 63488 00:06:51.238 }, 00:06:51.238 { 00:06:51.238 "name": "BaseBdev2", 00:06:51.238 "uuid": "5ed5ddbc-0e70-4056-838c-27d43981d3b5", 00:06:51.238 "is_configured": true, 00:06:51.238 "data_offset": 2048, 00:06:51.238 "data_size": 63488 00:06:51.238 } 00:06:51.238 ] 00:06:51.238 }' 00:06:51.238 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.238 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:51.808 [2024-12-08 20:02:23.532826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.808 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:51.808 "name": "Existed_Raid", 00:06:51.808 "aliases": [ 00:06:51.808 "2632ebd1-6356-4348-974c-30dc7a98c588" 00:06:51.808 ], 00:06:51.808 "product_name": "Raid Volume", 00:06:51.808 "block_size": 512, 00:06:51.808 "num_blocks": 126976, 00:06:51.808 "uuid": "2632ebd1-6356-4348-974c-30dc7a98c588", 00:06:51.808 "assigned_rate_limits": { 00:06:51.808 "rw_ios_per_sec": 0, 00:06:51.808 "rw_mbytes_per_sec": 0, 00:06:51.808 "r_mbytes_per_sec": 0, 00:06:51.808 "w_mbytes_per_sec": 0 00:06:51.808 }, 00:06:51.808 "claimed": false, 00:06:51.808 "zoned": false, 00:06:51.808 "supported_io_types": { 00:06:51.808 "read": true, 00:06:51.808 "write": true, 00:06:51.808 "unmap": true, 00:06:51.808 "flush": true, 00:06:51.808 "reset": true, 00:06:51.808 "nvme_admin": false, 00:06:51.808 "nvme_io": false, 00:06:51.808 "nvme_io_md": false, 00:06:51.808 "write_zeroes": true, 00:06:51.808 "zcopy": false, 00:06:51.808 "get_zone_info": false, 00:06:51.808 "zone_management": false, 00:06:51.808 "zone_append": false, 00:06:51.808 "compare": false, 00:06:51.808 "compare_and_write": false, 00:06:51.808 "abort": false, 00:06:51.808 "seek_hole": false, 00:06:51.808 "seek_data": false, 00:06:51.808 "copy": false, 00:06:51.808 "nvme_iov_md": false 00:06:51.808 }, 00:06:51.808 "memory_domains": [ 00:06:51.808 { 00:06:51.809 
"dma_device_id": "system", 00:06:51.809 "dma_device_type": 1 00:06:51.809 }, 00:06:51.809 { 00:06:51.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.809 "dma_device_type": 2 00:06:51.809 }, 00:06:51.809 { 00:06:51.809 "dma_device_id": "system", 00:06:51.809 "dma_device_type": 1 00:06:51.809 }, 00:06:51.809 { 00:06:51.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.809 "dma_device_type": 2 00:06:51.809 } 00:06:51.809 ], 00:06:51.809 "driver_specific": { 00:06:51.809 "raid": { 00:06:51.809 "uuid": "2632ebd1-6356-4348-974c-30dc7a98c588", 00:06:51.809 "strip_size_kb": 64, 00:06:51.809 "state": "online", 00:06:51.809 "raid_level": "raid0", 00:06:51.809 "superblock": true, 00:06:51.809 "num_base_bdevs": 2, 00:06:51.809 "num_base_bdevs_discovered": 2, 00:06:51.809 "num_base_bdevs_operational": 2, 00:06:51.809 "base_bdevs_list": [ 00:06:51.809 { 00:06:51.809 "name": "BaseBdev1", 00:06:51.809 "uuid": "db9fd320-3fbd-4cb5-a3fb-631e80920cf9", 00:06:51.809 "is_configured": true, 00:06:51.809 "data_offset": 2048, 00:06:51.809 "data_size": 63488 00:06:51.809 }, 00:06:51.809 { 00:06:51.809 "name": "BaseBdev2", 00:06:51.809 "uuid": "5ed5ddbc-0e70-4056-838c-27d43981d3b5", 00:06:51.809 "is_configured": true, 00:06:51.809 "data_offset": 2048, 00:06:51.809 "data_size": 63488 00:06:51.809 } 00:06:51.809 ] 00:06:51.809 } 00:06:51.809 } 00:06:51.809 }' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:51.809 BaseBdev2' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:51.809 20:02:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.809 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.809 [2024-12-08 20:02:23.752158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:51.809 [2024-12-08 20:02:23.752272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:51.809 [2024-12-08 20:02:23.752367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:52.069 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.070 "name": "Existed_Raid", 00:06:52.070 "uuid": "2632ebd1-6356-4348-974c-30dc7a98c588", 00:06:52.070 "strip_size_kb": 64, 00:06:52.070 "state": "offline", 00:06:52.070 "raid_level": "raid0", 00:06:52.070 "superblock": true, 00:06:52.070 "num_base_bdevs": 2, 00:06:52.070 "num_base_bdevs_discovered": 1, 00:06:52.070 "num_base_bdevs_operational": 1, 00:06:52.070 "base_bdevs_list": [ 00:06:52.070 { 00:06:52.070 "name": null, 00:06:52.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.070 "is_configured": false, 00:06:52.070 "data_offset": 0, 00:06:52.070 "data_size": 63488 00:06:52.070 }, 00:06:52.070 { 00:06:52.070 "name": "BaseBdev2", 00:06:52.070 "uuid": "5ed5ddbc-0e70-4056-838c-27d43981d3b5", 00:06:52.070 "is_configured": true, 00:06:52.070 "data_offset": 2048, 00:06:52.070 "data_size": 63488 00:06:52.070 } 00:06:52.070 ] 
00:06:52.070 }' 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.070 20:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.330 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.591 [2024-12-08 20:02:24.308723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:52.591 [2024-12-08 20:02:24.308822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.591 20:02:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60800 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60800 ']' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60800 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60800 00:06:52.591 killing process with pid 60800 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60800' 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60800 00:06:52.591 [2024-12-08 20:02:24.495619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.591 20:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60800 00:06:52.591 [2024-12-08 20:02:24.513286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.005 ************************************ 00:06:54.005 END TEST raid_state_function_test_sb 00:06:54.005 ************************************ 00:06:54.005 20:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:54.005 00:06:54.005 real 0m5.084s 00:06:54.005 user 0m7.148s 00:06:54.005 sys 0m0.842s 00:06:54.005 20:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.005 20:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.005 20:02:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:54.005 20:02:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:54.005 20:02:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.005 20:02:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.005 ************************************ 00:06:54.005 START TEST raid_superblock_test 00:06:54.005 ************************************ 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61051 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61051 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61051 ']' 00:06:54.005 
20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.005 20:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.005 [2024-12-08 20:02:25.880582] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:54.005 [2024-12-08 20:02:25.880771] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61051 ] 00:06:54.283 [2024-12-08 20:02:26.030360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.283 [2024-12-08 20:02:26.169260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.542 [2024-12-08 20:02:26.403172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.542 [2024-12-08 20:02:26.403249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:54.801 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.802 malloc1 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.802 [2024-12-08 20:02:26.770377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:54.802 [2024-12-08 20:02:26.770538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.802 [2024-12-08 20:02:26.770569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:54.802 [2024-12-08 20:02:26.770582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:54.802 [2024-12-08 20:02:26.773070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.802 [2024-12-08 20:02:26.773113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:54.802 pt1 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:54.802 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.061 malloc2 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.061 [2024-12-08 20:02:26.830472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:55.061 [2024-12-08 20:02:26.830618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.061 [2024-12-08 20:02:26.830670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:55.061 [2024-12-08 20:02:26.830707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.061 [2024-12-08 20:02:26.833139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.061 [2024-12-08 20:02:26.833218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:55.061 pt2 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.061 [2024-12-08 20:02:26.842536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:55.061 [2024-12-08 20:02:26.844702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:55.061 [2024-12-08 20:02:26.844917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.061 [2024-12-08 20:02:26.844984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:06:55.061 [2024-12-08 20:02:26.845301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.061 [2024-12-08 20:02:26.845540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.061 [2024-12-08 20:02:26.845591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:55.061 [2024-12-08 20:02:26.845858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.061 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:55.062 20:02:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.062 "name": "raid_bdev1", 00:06:55.062 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:55.062 "strip_size_kb": 64, 00:06:55.062 "state": "online", 00:06:55.062 "raid_level": "raid0", 00:06:55.062 "superblock": true, 00:06:55.062 "num_base_bdevs": 2, 00:06:55.062 "num_base_bdevs_discovered": 2, 00:06:55.062 "num_base_bdevs_operational": 2, 00:06:55.062 "base_bdevs_list": [ 00:06:55.062 { 00:06:55.062 "name": "pt1", 00:06:55.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.062 "is_configured": true, 00:06:55.062 "data_offset": 2048, 00:06:55.062 "data_size": 63488 00:06:55.062 }, 00:06:55.062 { 00:06:55.062 "name": "pt2", 00:06:55.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.062 "is_configured": true, 00:06:55.062 "data_offset": 2048, 00:06:55.062 "data_size": 63488 00:06:55.062 } 00:06:55.062 ] 00:06:55.062 }' 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.062 20:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:55.321 
20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.321 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.321 [2024-12-08 20:02:27.290190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.581 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.581 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:55.581 "name": "raid_bdev1", 00:06:55.581 "aliases": [ 00:06:55.581 "929f11db-b99d-4e29-862b-b1d310b6177b" 00:06:55.581 ], 00:06:55.581 "product_name": "Raid Volume", 00:06:55.581 "block_size": 512, 00:06:55.581 "num_blocks": 126976, 00:06:55.581 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:55.581 "assigned_rate_limits": { 00:06:55.581 "rw_ios_per_sec": 0, 00:06:55.581 "rw_mbytes_per_sec": 0, 00:06:55.581 "r_mbytes_per_sec": 0, 00:06:55.581 "w_mbytes_per_sec": 0 00:06:55.581 }, 00:06:55.581 "claimed": false, 00:06:55.581 "zoned": false, 00:06:55.581 "supported_io_types": { 00:06:55.581 "read": true, 00:06:55.581 "write": true, 00:06:55.581 "unmap": true, 00:06:55.581 "flush": true, 00:06:55.581 "reset": true, 00:06:55.581 "nvme_admin": false, 00:06:55.581 "nvme_io": false, 00:06:55.582 "nvme_io_md": false, 00:06:55.582 "write_zeroes": true, 00:06:55.582 "zcopy": false, 00:06:55.582 "get_zone_info": false, 00:06:55.582 "zone_management": false, 00:06:55.582 "zone_append": false, 00:06:55.582 "compare": false, 00:06:55.582 "compare_and_write": false, 00:06:55.582 "abort": false, 00:06:55.582 "seek_hole": false, 00:06:55.582 
"seek_data": false, 00:06:55.582 "copy": false, 00:06:55.582 "nvme_iov_md": false 00:06:55.582 }, 00:06:55.582 "memory_domains": [ 00:06:55.582 { 00:06:55.582 "dma_device_id": "system", 00:06:55.582 "dma_device_type": 1 00:06:55.582 }, 00:06:55.582 { 00:06:55.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.582 "dma_device_type": 2 00:06:55.582 }, 00:06:55.582 { 00:06:55.582 "dma_device_id": "system", 00:06:55.582 "dma_device_type": 1 00:06:55.582 }, 00:06:55.582 { 00:06:55.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.582 "dma_device_type": 2 00:06:55.582 } 00:06:55.582 ], 00:06:55.582 "driver_specific": { 00:06:55.582 "raid": { 00:06:55.582 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:55.582 "strip_size_kb": 64, 00:06:55.582 "state": "online", 00:06:55.582 "raid_level": "raid0", 00:06:55.582 "superblock": true, 00:06:55.582 "num_base_bdevs": 2, 00:06:55.582 "num_base_bdevs_discovered": 2, 00:06:55.582 "num_base_bdevs_operational": 2, 00:06:55.582 "base_bdevs_list": [ 00:06:55.582 { 00:06:55.582 "name": "pt1", 00:06:55.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.582 "is_configured": true, 00:06:55.582 "data_offset": 2048, 00:06:55.582 "data_size": 63488 00:06:55.582 }, 00:06:55.582 { 00:06:55.582 "name": "pt2", 00:06:55.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.582 "is_configured": true, 00:06:55.582 "data_offset": 2048, 00:06:55.582 "data_size": 63488 00:06:55.582 } 00:06:55.582 ] 00:06:55.582 } 00:06:55.582 } 00:06:55.582 }' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:55.582 pt2' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:55.582 20:02:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:55.582 [2024-12-08 20:02:27.525685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.582 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=929f11db-b99d-4e29-862b-b1d310b6177b 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 929f11db-b99d-4e29-862b-b1d310b6177b ']' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 [2024-12-08 20:02:27.577224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:55.841 [2024-12-08 20:02:27.577267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.841 [2024-12-08 20:02:27.577403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.841 [2024-12-08 20:02:27.577499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.841 [2024-12-08 20:02:27.577530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 [2024-12-08 20:02:27.713120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:55.841 [2024-12-08 20:02:27.715419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:55.841 [2024-12-08 20:02:27.715510] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:55.841 [2024-12-08 20:02:27.715581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:55.841 [2024-12-08 20:02:27.715599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:55.841 [2024-12-08 20:02:27.715617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:55.841 request: 00:06:55.841 { 00:06:55.841 "name": "raid_bdev1", 00:06:55.841 "raid_level": "raid0", 00:06:55.841 "base_bdevs": [ 00:06:55.841 "malloc1", 00:06:55.841 "malloc2" 00:06:55.841 ], 00:06:55.841 "strip_size_kb": 64, 00:06:55.841 "superblock": false, 00:06:55.841 "method": "bdev_raid_create", 00:06:55.841 "req_id": 1 00:06:55.841 } 00:06:55.841 Got JSON-RPC error response 00:06:55.841 response: 00:06:55.841 { 00:06:55.841 "code": -17, 00:06:55.841 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:55.841 } 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 
20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:55.841 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.842 [2024-12-08 20:02:27.776889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:55.842 [2024-12-08 20:02:27.776959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.842 [2024-12-08 20:02:27.776981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:55.842 [2024-12-08 20:02:27.776994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.842 [2024-12-08 20:02:27.779551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.842 [2024-12-08 20:02:27.779592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:55.842 [2024-12-08 20:02:27.779678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:55.842 [2024-12-08 20:02:27.779731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:55.842 pt1 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.842 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.101 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.101 "name": "raid_bdev1", 00:06:56.101 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:56.101 "strip_size_kb": 64, 00:06:56.101 "state": "configuring", 00:06:56.101 "raid_level": "raid0", 00:06:56.101 "superblock": true, 00:06:56.101 "num_base_bdevs": 2, 00:06:56.101 "num_base_bdevs_discovered": 1, 00:06:56.101 "num_base_bdevs_operational": 2, 00:06:56.101 "base_bdevs_list": [ 00:06:56.101 { 00:06:56.101 "name": "pt1", 00:06:56.101 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:56.101 "is_configured": true, 00:06:56.101 "data_offset": 2048, 00:06:56.101 "data_size": 63488 00:06:56.101 }, 00:06:56.101 { 00:06:56.101 "name": null, 00:06:56.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:56.101 "is_configured": false, 00:06:56.101 "data_offset": 2048, 00:06:56.101 "data_size": 63488 00:06:56.101 } 00:06:56.101 ] 00:06:56.101 }' 00:06:56.101 20:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.101 20:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.360 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 [2024-12-08 20:02:28.192250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:56.360 [2024-12-08 20:02:28.192359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.360 [2024-12-08 20:02:28.192397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:56.360 [2024-12-08 20:02:28.192424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.360 [2024-12-08 20:02:28.193122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.360 [2024-12-08 20:02:28.193160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:06:56.360 [2024-12-08 20:02:28.193296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:56.360 [2024-12-08 20:02:28.193342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:56.360 [2024-12-08 20:02:28.193523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:56.360 [2024-12-08 20:02:28.193546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:56.360 [2024-12-08 20:02:28.193852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:56.360 [2024-12-08 20:02:28.194078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:56.360 [2024-12-08 20:02:28.194097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:56.361 [2024-12-08 20:02:28.194293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.361 pt2 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.361 "name": "raid_bdev1", 00:06:56.361 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:56.361 "strip_size_kb": 64, 00:06:56.361 "state": "online", 00:06:56.361 "raid_level": "raid0", 00:06:56.361 "superblock": true, 00:06:56.361 "num_base_bdevs": 2, 00:06:56.361 "num_base_bdevs_discovered": 2, 00:06:56.361 "num_base_bdevs_operational": 2, 00:06:56.361 "base_bdevs_list": [ 00:06:56.361 { 00:06:56.361 "name": "pt1", 00:06:56.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:56.361 "is_configured": true, 00:06:56.361 "data_offset": 2048, 00:06:56.361 "data_size": 63488 00:06:56.361 }, 00:06:56.361 { 00:06:56.361 "name": "pt2", 00:06:56.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:56.361 "is_configured": true, 00:06:56.361 "data_offset": 2048, 00:06:56.361 "data_size": 63488 00:06:56.361 } 00:06:56.361 ] 00:06:56.361 }' 00:06:56.361 20:02:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.361 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.929 [2024-12-08 20:02:28.659723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.929 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.929 "name": "raid_bdev1", 00:06:56.929 "aliases": [ 00:06:56.929 "929f11db-b99d-4e29-862b-b1d310b6177b" 00:06:56.929 ], 00:06:56.929 "product_name": "Raid Volume", 00:06:56.929 "block_size": 512, 00:06:56.929 "num_blocks": 126976, 00:06:56.930 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:56.930 "assigned_rate_limits": { 00:06:56.930 "rw_ios_per_sec": 0, 00:06:56.930 "rw_mbytes_per_sec": 0, 00:06:56.930 
"r_mbytes_per_sec": 0, 00:06:56.930 "w_mbytes_per_sec": 0 00:06:56.930 }, 00:06:56.930 "claimed": false, 00:06:56.930 "zoned": false, 00:06:56.930 "supported_io_types": { 00:06:56.930 "read": true, 00:06:56.930 "write": true, 00:06:56.930 "unmap": true, 00:06:56.930 "flush": true, 00:06:56.930 "reset": true, 00:06:56.930 "nvme_admin": false, 00:06:56.930 "nvme_io": false, 00:06:56.930 "nvme_io_md": false, 00:06:56.930 "write_zeroes": true, 00:06:56.930 "zcopy": false, 00:06:56.930 "get_zone_info": false, 00:06:56.930 "zone_management": false, 00:06:56.930 "zone_append": false, 00:06:56.930 "compare": false, 00:06:56.930 "compare_and_write": false, 00:06:56.930 "abort": false, 00:06:56.930 "seek_hole": false, 00:06:56.930 "seek_data": false, 00:06:56.930 "copy": false, 00:06:56.930 "nvme_iov_md": false 00:06:56.930 }, 00:06:56.930 "memory_domains": [ 00:06:56.930 { 00:06:56.930 "dma_device_id": "system", 00:06:56.930 "dma_device_type": 1 00:06:56.930 }, 00:06:56.930 { 00:06:56.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.930 "dma_device_type": 2 00:06:56.930 }, 00:06:56.930 { 00:06:56.930 "dma_device_id": "system", 00:06:56.930 "dma_device_type": 1 00:06:56.930 }, 00:06:56.930 { 00:06:56.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.930 "dma_device_type": 2 00:06:56.930 } 00:06:56.930 ], 00:06:56.930 "driver_specific": { 00:06:56.930 "raid": { 00:06:56.930 "uuid": "929f11db-b99d-4e29-862b-b1d310b6177b", 00:06:56.930 "strip_size_kb": 64, 00:06:56.930 "state": "online", 00:06:56.930 "raid_level": "raid0", 00:06:56.930 "superblock": true, 00:06:56.930 "num_base_bdevs": 2, 00:06:56.930 "num_base_bdevs_discovered": 2, 00:06:56.930 "num_base_bdevs_operational": 2, 00:06:56.930 "base_bdevs_list": [ 00:06:56.930 { 00:06:56.930 "name": "pt1", 00:06:56.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:56.930 "is_configured": true, 00:06:56.930 "data_offset": 2048, 00:06:56.930 "data_size": 63488 00:06:56.930 }, 00:06:56.930 { 00:06:56.930 "name": 
"pt2", 00:06:56.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:56.930 "is_configured": true, 00:06:56.930 "data_offset": 2048, 00:06:56.930 "data_size": 63488 00:06:56.930 } 00:06:56.930 ] 00:06:56.930 } 00:06:56.930 } 00:06:56.930 }' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:56.930 pt2' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.930 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 [2024-12-08 20:02:28.887393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 929f11db-b99d-4e29-862b-b1d310b6177b '!=' 929f11db-b99d-4e29-862b-b1d310b6177b ']' 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61051 00:06:57.189 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61051 ']' 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61051 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61051 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61051' 00:06:57.190 killing process with pid 61051 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61051 00:06:57.190 [2024-12-08 20:02:28.951392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.190 [2024-12-08 20:02:28.951523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.190 20:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61051 00:06:57.190 [2024-12-08 20:02:28.951591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.190 [2024-12-08 20:02:28.951612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:57.449 [2024-12-08 20:02:29.171027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.831 20:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:58.831 00:06:58.831 real 0m4.612s 00:06:58.831 user 0m6.314s 00:06:58.831 sys 0m0.814s 00:06:58.831 20:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.831 20:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x
00:06:58.831 ************************************
00:06:58.831 END TEST raid_superblock_test
00:06:58.831 ************************************
00:06:58.831 20:02:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:06:58.831 20:02:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:58.831 20:02:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.831 20:02:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:58.831 ************************************
00:06:58.831 START TEST raid_read_error_test
00:06:58.831 ************************************
00:06:58.831 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read
00:06:58.831 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:06:58.831 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:06:58.831 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:06:58.831 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w5XUr8kJEb
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61264
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61264
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61264 ']'
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.832 20:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.832 [2024-12-08 20:02:30.575696] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:06:58.832 [2024-12-08 20:02:30.575815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ]
00:06:58.832 [2024-12-08 20:02:30.749120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.092 [2024-12-08 20:02:30.888893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.352 [2024-12-08 20:02:31.128210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.352 [2024-12-08 20:02:31.128259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 BaseBdev1_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 true
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 [2024-12-08 20:02:31.456637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:06:59.613 [2024-12-08 20:02:31.456722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:59.613 [2024-12-08 20:02:31.456747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:06:59.613 [2024-12-08 20:02:31.456781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:59.613 [2024-12-08 20:02:31.459628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:59.613 [2024-12-08 20:02:31.459682] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:06:59.613 BaseBdev1
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 BaseBdev2_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 true
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 [2024-12-08 20:02:31.531595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:06:59.613 [2024-12-08 20:02:31.531665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:59.613 [2024-12-08 20:02:31.531684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:06:59.613 [2024-12-08 20:02:31.531699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:59.613 [2024-12-08 20:02:31.534142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:59.613 [2024-12-08 20:02:31.534186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:06:59.613 BaseBdev2
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 [2024-12-08 20:02:31.543655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:06:59.613 [2024-12-08 20:02:31.545799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:06:59.613 [2024-12-08 20:02:31.546078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:06:59.613 [2024-12-08 20:02:31.546128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:06:59.613 [2024-12-08 20:02:31.546404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:06:59.613 [2024-12-08 20:02:31.546634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:06:59.613 [2024-12-08 20:02:31.546658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:06:59.613 [2024-12-08 20:02:31.546846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.613 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.873 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:59.873 "name": "raid_bdev1",
00:06:59.873 "uuid": "592d0504-6822-4776-81e9-80e426a84286",
00:06:59.873 "strip_size_kb": 64,
00:06:59.873 "state": "online",
00:06:59.873 "raid_level": "raid0",
00:06:59.873 "superblock": true,
00:06:59.873 "num_base_bdevs": 2,
00:06:59.873 "num_base_bdevs_discovered": 2,
00:06:59.873 "num_base_bdevs_operational": 2,
00:06:59.873 "base_bdevs_list": [
00:06:59.873 {
00:06:59.873 "name": "BaseBdev1",
00:06:59.873 "uuid": "e13c0053-593c-5a64-aef7-24daf8c96d48",
00:06:59.874 "is_configured": true,
00:06:59.874 "data_offset": 2048,
00:06:59.874 "data_size": 63488
00:06:59.874 },
00:06:59.874 {
00:06:59.874 "name": "BaseBdev2",
00:06:59.874 "uuid": "36ec3cf0-7cbb-5da0-99ee-5422da019d5c",
00:06:59.874 "is_configured": true,
00:06:59.874 "data_offset": 2048,
00:06:59.874 "data_size": 63488
00:06:59.874 }
00:06:59.874 ]
00:06:59.874 }'
00:06:59.874 20:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:59.874 20:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.133 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:00.133 20:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:00.393 [2024-12-08 20:02:32.116792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:01.334 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:01.334 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.334 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.335 "name": "raid_bdev1",
00:07:01.335 "uuid": "592d0504-6822-4776-81e9-80e426a84286",
00:07:01.335 "strip_size_kb": 64,
00:07:01.335 "state": "online",
00:07:01.335 "raid_level": "raid0",
00:07:01.335 "superblock": true,
00:07:01.335 "num_base_bdevs": 2,
00:07:01.335 "num_base_bdevs_discovered": 2,
00:07:01.335 "num_base_bdevs_operational": 2,
00:07:01.335 "base_bdevs_list": [
00:07:01.335 {
00:07:01.335 "name": "BaseBdev1",
00:07:01.335 "uuid": "e13c0053-593c-5a64-aef7-24daf8c96d48",
00:07:01.335 "is_configured": true,
00:07:01.335 "data_offset": 2048,
00:07:01.335 "data_size": 63488
00:07:01.335 },
00:07:01.335 {
00:07:01.335 "name": "BaseBdev2",
00:07:01.335 "uuid": "36ec3cf0-7cbb-5da0-99ee-5422da019d5c",
00:07:01.335 "is_configured": true,
00:07:01.335 "data_offset": 2048,
00:07:01.335 "data_size": 63488
00:07:01.335 }
00:07:01.335 ]
00:07:01.335 }'
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.335 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.595 [2024-12-08 20:02:33.461468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:01.595 [2024-12-08 20:02:33.461529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:01.595 [2024-12-08 20:02:33.464200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:01.595 [2024-12-08 20:02:33.464261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:01.595 [2024-12-08 20:02:33.464300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:01.595 [2024-12-08 20:02:33.464324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:01.595 {
00:07:01.595 "results": [
00:07:01.595 {
00:07:01.595 "job": "raid_bdev1",
00:07:01.595 "core_mask": "0x1",
00:07:01.595 "workload": "randrw",
00:07:01.595 "percentage": 50,
00:07:01.595 "status": "finished",
00:07:01.595 "queue_depth": 1,
00:07:01.595 "io_size": 131072,
00:07:01.595 "runtime": 1.345253,
00:07:01.595 "iops": 13504.522941037856,
00:07:01.595 "mibps": 1688.065367629732,
00:07:01.595 "io_failed": 1,
00:07:01.595 "io_timeout": 0,
00:07:01.595 "avg_latency_us": 103.82253705829531,
00:07:01.595 "min_latency_us": 26.941484716157206,
00:07:01.595 "max_latency_us": 1337.907423580786
00:07:01.595 }
00:07:01.595 ],
00:07:01.595 "core_count": 1
00:07:01.595 }
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61264
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61264 ']'
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61264
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61264
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:01.595 killing process with pid 61264
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61264'
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61264
00:07:01.595 [2024-12-08 20:02:33.508426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:01.595 20:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61264
00:07:01.855 [2024-12-08 20:02:33.656811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:03.249 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:03.249 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w5XUr8kJEb
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:07:03.250
00:07:03.250 real 0m4.457s
00:07:03.250 user 0m5.212s
00:07:03.250 sys 0m0.632s
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.250 20:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.250 ************************************
00:07:03.250 END TEST raid_read_error_test
00:07:03.250 ************************************
00:07:03.250 20:02:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write
00:07:03.250 20:02:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:03.250 20:02:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.250 20:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:03.250 ************************************
00:07:03.250 START TEST raid_write_error_test
00:07:03.250 ************************************
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:03.250 20:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ABdmsXk3Ds
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61404
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61404
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61404 ']'
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:03.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:03.250 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.250 [2024-12-08 20:02:35.093543] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:07:03.250 [2024-12-08 20:02:35.093659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ]
00:07:03.512 [2024-12-08 20:02:35.269896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.512 [2024-12-08 20:02:35.408293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.771 [2024-12-08 20:02:35.646103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:03.771 [2024-12-08 20:02:35.646161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.030 BaseBdev1_malloc
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.030 true
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.030 [2024-12-08 20:02:35.984170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:04.030 [2024-12-08 20:02:35.984247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:04.030 [2024-12-08 20:02:35.984285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:04.030 [2024-12-08 20:02:35.984299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:04.030 [2024-12-08 20:02:35.986516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:04.030 [2024-12-08 20:02:35.986562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:04.030 BaseBdev1
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.030 20:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.289 BaseBdev2_malloc
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.289 true
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.289 [2024-12-08 20:02:36.057321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:04.289 [2024-12-08 20:02:36.057385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:04.289 [2024-12-08 20:02:36.057414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:04.289 [2024-12-08 20:02:36.057428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:04.289 [2024-12-08 20:02:36.059862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:04.289 [2024-12-08 20:02:36.059907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:04.289 BaseBdev2
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.289 [2024-12-08 20:02:36.069359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:04.289 [2024-12-08 20:02:36.071482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:04.289 [2024-12-08 20:02:36.071701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:04.289 [2024-12-08 20:02:36.071727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:04.289 [2024-12-08 20:02:36.072038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:04.289 [2024-12-08 20:02:36.072257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:04.289 [2024-12-08 20:02:36.072279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:04.289 [2024-12-08 20:02:36.072462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:04.289 "name": "raid_bdev1",
00:07:04.289 "uuid": "f1941ffa-870a-4c3a-bff0-f905cb85c2d9",
00:07:04.289 "strip_size_kb": 64,
00:07:04.289 "state": "online",
00:07:04.289 "raid_level": "raid0",
00:07:04.289 "superblock": true,
00:07:04.289 "num_base_bdevs": 2,
00:07:04.289 "num_base_bdevs_discovered": 2,
00:07:04.289 "num_base_bdevs_operational": 2,
00:07:04.289 "base_bdevs_list": [
00:07:04.289 {
00:07:04.289 "name": "BaseBdev1",
00:07:04.289 "uuid": "2d83c411-3fa1-545d-894d-3905a37ca872",
00:07:04.289 "is_configured": true,
00:07:04.289 "data_offset": 2048,
00:07:04.289 "data_size": 63488
00:07:04.289 },
00:07:04.289 {
00:07:04.289 "name": "BaseBdev2",
00:07:04.289 "uuid": "2fa6bf75-b9b8-538b-8a6e-67183af7e04a",
00:07:04.289 "is_configured": true,
00:07:04.289 "data_offset": 2048,
00:07:04.289 "data_size": 63488
00:07:04.289 }
00:07:04.289 ]
00:07:04.289 }'
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:04.289 20:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.549 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:04.549 20:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:04.808 [2024-12-08 20:02:36.558035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.806 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.807 "name": "raid_bdev1", 00:07:05.807 "uuid": "f1941ffa-870a-4c3a-bff0-f905cb85c2d9", 00:07:05.807 "strip_size_kb": 64, 00:07:05.807 "state": "online", 00:07:05.807 "raid_level": "raid0", 00:07:05.807 "superblock": true, 00:07:05.807 "num_base_bdevs": 2, 00:07:05.807 "num_base_bdevs_discovered": 2, 00:07:05.807 "num_base_bdevs_operational": 2, 00:07:05.807 "base_bdevs_list": [ 00:07:05.807 { 00:07:05.807 "name": "BaseBdev1", 00:07:05.807 "uuid": "2d83c411-3fa1-545d-894d-3905a37ca872", 00:07:05.807 "is_configured": true, 00:07:05.807 "data_offset": 2048, 00:07:05.807 "data_size": 63488 00:07:05.807 }, 00:07:05.807 { 00:07:05.807 "name": "BaseBdev2", 00:07:05.807 "uuid": "2fa6bf75-b9b8-538b-8a6e-67183af7e04a", 00:07:05.807 "is_configured": true, 00:07:05.807 "data_offset": 2048, 00:07:05.807 "data_size": 63488 00:07:05.807 } 00:07:05.807 ] 00:07:05.807 }' 00:07:05.807 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.807 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.066 [2024-12-08 20:02:37.980031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.066 [2024-12-08 20:02:37.980096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.066 [2024-12-08 20:02:37.982822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.066 [2024-12-08 20:02:37.982878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.066 [2024-12-08 20:02:37.982918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.066 [2024-12-08 20:02:37.982938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:06.066 { 00:07:06.066 "results": [ 00:07:06.066 { 00:07:06.066 "job": "raid_bdev1", 00:07:06.066 "core_mask": "0x1", 00:07:06.066 "workload": "randrw", 00:07:06.066 "percentage": 50, 00:07:06.066 "status": "finished", 00:07:06.066 "queue_depth": 1, 00:07:06.066 "io_size": 131072, 00:07:06.066 "runtime": 1.422883, 00:07:06.066 "iops": 13488.108298433532, 00:07:06.066 "mibps": 1686.0135373041915, 00:07:06.066 "io_failed": 1, 00:07:06.066 "io_timeout": 0, 00:07:06.066 "avg_latency_us": 104.11000917592546, 00:07:06.066 "min_latency_us": 25.9353711790393, 00:07:06.066 "max_latency_us": 1402.2986899563318 00:07:06.066 } 00:07:06.066 ], 00:07:06.066 "core_count": 1 00:07:06.066 } 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61404 00:07:06.066 20:02:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61404 ']' 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61404 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.066 20:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61404 00:07:06.066 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.066 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.066 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61404' 00:07:06.066 killing process with pid 61404 00:07:06.066 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61404 00:07:06.066 [2024-12-08 20:02:38.038501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.066 20:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61404 00:07:06.326 [2024-12-08 20:02:38.185319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.704 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ABdmsXk3Ds 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.705 20:02:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:07.705 00:07:07.705 real 0m4.472s 00:07:07.705 user 0m5.207s 00:07:07.705 sys 0m0.625s 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.705 20:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.705 ************************************ 00:07:07.705 END TEST raid_write_error_test 00:07:07.705 ************************************ 00:07:07.705 20:02:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:07.705 20:02:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:07.705 20:02:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.705 20:02:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.705 20:02:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.705 ************************************ 00:07:07.705 START TEST raid_state_function_test 00:07:07.705 ************************************ 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61542 
00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61542' 00:07:07.705 Process raid pid: 61542 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61542 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61542 ']' 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.705 20:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.705 [2024-12-08 20:02:39.626532] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:07.705 [2024-12-08 20:02:39.626651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.964 [2024-12-08 20:02:39.800739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.964 [2024-12-08 20:02:39.933728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.222 [2024-12-08 20:02:40.172774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.222 [2024-12-08 20:02:40.172848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.481 [2024-12-08 20:02:40.451173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.481 [2024-12-08 20:02:40.451267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.481 [2024-12-08 20:02:40.451279] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.481 [2024-12-08 20:02:40.451293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.481 20:02:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.481 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.739 "name": "Existed_Raid", 00:07:08.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.739 "strip_size_kb": 64, 00:07:08.739 "state": "configuring", 00:07:08.739 
"raid_level": "concat", 00:07:08.739 "superblock": false, 00:07:08.739 "num_base_bdevs": 2, 00:07:08.739 "num_base_bdevs_discovered": 0, 00:07:08.739 "num_base_bdevs_operational": 2, 00:07:08.739 "base_bdevs_list": [ 00:07:08.739 { 00:07:08.739 "name": "BaseBdev1", 00:07:08.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.739 "is_configured": false, 00:07:08.739 "data_offset": 0, 00:07:08.739 "data_size": 0 00:07:08.739 }, 00:07:08.739 { 00:07:08.739 "name": "BaseBdev2", 00:07:08.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.739 "is_configured": false, 00:07:08.739 "data_offset": 0, 00:07:08.739 "data_size": 0 00:07:08.739 } 00:07:08.739 ] 00:07:08.739 }' 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.739 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 [2024-12-08 20:02:40.842514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.998 [2024-12-08 20:02:40.842578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:08.998 [2024-12-08 20:02:40.850468] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.998 [2024-12-08 20:02:40.850537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.998 [2024-12-08 20:02:40.850549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.998 [2024-12-08 20:02:40.850563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 [2024-12-08 20:02:40.900444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.998 BaseBdev1 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 [ 00:07:08.998 { 00:07:08.998 "name": "BaseBdev1", 00:07:08.998 "aliases": [ 00:07:08.998 "e99ac940-9aac-4ba3-81b7-e8c72032bf75" 00:07:08.998 ], 00:07:08.998 "product_name": "Malloc disk", 00:07:08.998 "block_size": 512, 00:07:08.998 "num_blocks": 65536, 00:07:08.998 "uuid": "e99ac940-9aac-4ba3-81b7-e8c72032bf75", 00:07:08.998 "assigned_rate_limits": { 00:07:08.998 "rw_ios_per_sec": 0, 00:07:08.998 "rw_mbytes_per_sec": 0, 00:07:08.998 "r_mbytes_per_sec": 0, 00:07:08.998 "w_mbytes_per_sec": 0 00:07:08.998 }, 00:07:08.998 "claimed": true, 00:07:08.998 "claim_type": "exclusive_write", 00:07:08.998 "zoned": false, 00:07:08.998 "supported_io_types": { 00:07:08.998 "read": true, 00:07:08.998 "write": true, 00:07:08.998 "unmap": true, 00:07:08.998 "flush": true, 00:07:08.998 "reset": true, 00:07:08.998 "nvme_admin": false, 00:07:08.998 "nvme_io": false, 00:07:08.998 "nvme_io_md": false, 00:07:08.998 "write_zeroes": true, 00:07:08.998 "zcopy": true, 00:07:08.998 "get_zone_info": false, 00:07:08.998 "zone_management": false, 00:07:08.998 "zone_append": false, 00:07:08.998 "compare": false, 00:07:08.998 "compare_and_write": false, 00:07:08.998 "abort": true, 00:07:08.998 "seek_hole": false, 00:07:08.998 "seek_data": false, 00:07:08.998 "copy": true, 00:07:08.998 "nvme_iov_md": 
false 00:07:08.998 }, 00:07:08.998 "memory_domains": [ 00:07:08.998 { 00:07:08.998 "dma_device_id": "system", 00:07:08.998 "dma_device_type": 1 00:07:08.998 }, 00:07:08.998 { 00:07:08.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.998 "dma_device_type": 2 00:07:08.998 } 00:07:08.998 ], 00:07:08.998 "driver_specific": {} 00:07:08.998 } 00:07:08.998 ] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.998 20:02:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.998 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.255 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.255 "name": "Existed_Raid", 00:07:09.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.255 "strip_size_kb": 64, 00:07:09.255 "state": "configuring", 00:07:09.255 "raid_level": "concat", 00:07:09.255 "superblock": false, 00:07:09.255 "num_base_bdevs": 2, 00:07:09.255 "num_base_bdevs_discovered": 1, 00:07:09.255 "num_base_bdevs_operational": 2, 00:07:09.255 "base_bdevs_list": [ 00:07:09.255 { 00:07:09.255 "name": "BaseBdev1", 00:07:09.255 "uuid": "e99ac940-9aac-4ba3-81b7-e8c72032bf75", 00:07:09.255 "is_configured": true, 00:07:09.256 "data_offset": 0, 00:07:09.256 "data_size": 65536 00:07:09.256 }, 00:07:09.256 { 00:07:09.256 "name": "BaseBdev2", 00:07:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.256 "is_configured": false, 00:07:09.256 "data_offset": 0, 00:07:09.256 "data_size": 0 00:07:09.256 } 00:07:09.256 ] 00:07:09.256 }' 00:07:09.256 20:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.256 20:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.515 [2024-12-08 20:02:41.351811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.515 [2024-12-08 20:02:41.351898] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.515 [2024-12-08 20:02:41.363795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.515 [2024-12-08 20:02:41.365943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.515 [2024-12-08 20:02:41.366008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.515 "name": "Existed_Raid", 00:07:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.515 "strip_size_kb": 64, 00:07:09.515 "state": "configuring", 00:07:09.515 "raid_level": "concat", 00:07:09.515 "superblock": false, 00:07:09.515 "num_base_bdevs": 2, 00:07:09.515 "num_base_bdevs_discovered": 1, 00:07:09.515 "num_base_bdevs_operational": 2, 00:07:09.515 "base_bdevs_list": [ 00:07:09.515 { 00:07:09.515 "name": "BaseBdev1", 00:07:09.515 "uuid": "e99ac940-9aac-4ba3-81b7-e8c72032bf75", 00:07:09.515 "is_configured": true, 00:07:09.515 "data_offset": 0, 00:07:09.515 "data_size": 65536 00:07:09.515 }, 00:07:09.515 { 00:07:09.515 "name": "BaseBdev2", 00:07:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.515 "is_configured": false, 00:07:09.515 "data_offset": 0, 00:07:09.515 "data_size": 0 
00:07:09.515 } 00:07:09.515 ] 00:07:09.515 }' 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.515 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.085 [2024-12-08 20:02:41.807425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.085 [2024-12-08 20:02:41.807494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.085 [2024-12-08 20:02:41.807503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:10.085 [2024-12-08 20:02:41.807801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:10.085 [2024-12-08 20:02:41.808167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.085 [2024-12-08 20:02:41.808195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:10.085 [2024-12-08 20:02:41.808527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.085 BaseBdev2 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.085 20:02:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.085 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.086 [ 00:07:10.086 { 00:07:10.086 "name": "BaseBdev2", 00:07:10.086 "aliases": [ 00:07:10.086 "d5647fab-f327-4f36-9866-4bc23e8f3c9c" 00:07:10.086 ], 00:07:10.086 "product_name": "Malloc disk", 00:07:10.086 "block_size": 512, 00:07:10.086 "num_blocks": 65536, 00:07:10.086 "uuid": "d5647fab-f327-4f36-9866-4bc23e8f3c9c", 00:07:10.086 "assigned_rate_limits": { 00:07:10.086 "rw_ios_per_sec": 0, 00:07:10.086 "rw_mbytes_per_sec": 0, 00:07:10.086 "r_mbytes_per_sec": 0, 00:07:10.086 "w_mbytes_per_sec": 0 00:07:10.086 }, 00:07:10.086 "claimed": true, 00:07:10.086 "claim_type": "exclusive_write", 00:07:10.086 "zoned": false, 00:07:10.086 "supported_io_types": { 00:07:10.086 "read": true, 00:07:10.086 "write": true, 00:07:10.086 "unmap": true, 00:07:10.086 "flush": true, 00:07:10.086 "reset": true, 00:07:10.086 "nvme_admin": false, 00:07:10.086 "nvme_io": false, 00:07:10.086 "nvme_io_md": 
false, 00:07:10.086 "write_zeroes": true, 00:07:10.086 "zcopy": true, 00:07:10.086 "get_zone_info": false, 00:07:10.086 "zone_management": false, 00:07:10.086 "zone_append": false, 00:07:10.086 "compare": false, 00:07:10.086 "compare_and_write": false, 00:07:10.086 "abort": true, 00:07:10.086 "seek_hole": false, 00:07:10.086 "seek_data": false, 00:07:10.086 "copy": true, 00:07:10.086 "nvme_iov_md": false 00:07:10.086 }, 00:07:10.086 "memory_domains": [ 00:07:10.086 { 00:07:10.086 "dma_device_id": "system", 00:07:10.086 "dma_device_type": 1 00:07:10.086 }, 00:07:10.086 { 00:07:10.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.086 "dma_device_type": 2 00:07:10.086 } 00:07:10.086 ], 00:07:10.086 "driver_specific": {} 00:07:10.086 } 00:07:10.086 ] 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.086 "name": "Existed_Raid", 00:07:10.086 "uuid": "ff66b5f7-cdf4-4b97-b948-49f9c4a571c1", 00:07:10.086 "strip_size_kb": 64, 00:07:10.086 "state": "online", 00:07:10.086 "raid_level": "concat", 00:07:10.086 "superblock": false, 00:07:10.086 "num_base_bdevs": 2, 00:07:10.086 "num_base_bdevs_discovered": 2, 00:07:10.086 "num_base_bdevs_operational": 2, 00:07:10.086 "base_bdevs_list": [ 00:07:10.086 { 00:07:10.086 "name": "BaseBdev1", 00:07:10.086 "uuid": "e99ac940-9aac-4ba3-81b7-e8c72032bf75", 00:07:10.086 "is_configured": true, 00:07:10.086 "data_offset": 0, 00:07:10.086 "data_size": 65536 00:07:10.086 }, 00:07:10.086 { 00:07:10.086 "name": "BaseBdev2", 00:07:10.086 "uuid": "d5647fab-f327-4f36-9866-4bc23e8f3c9c", 00:07:10.086 "is_configured": true, 00:07:10.086 "data_offset": 0, 00:07:10.086 "data_size": 65536 00:07:10.086 } 00:07:10.086 ] 00:07:10.086 }' 00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:10.086 20:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.346 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:10.346 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:10.346 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.346 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.347 [2024-12-08 20:02:42.247261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.347 "name": "Existed_Raid", 00:07:10.347 "aliases": [ 00:07:10.347 "ff66b5f7-cdf4-4b97-b948-49f9c4a571c1" 00:07:10.347 ], 00:07:10.347 "product_name": "Raid Volume", 00:07:10.347 "block_size": 512, 00:07:10.347 "num_blocks": 131072, 00:07:10.347 "uuid": "ff66b5f7-cdf4-4b97-b948-49f9c4a571c1", 00:07:10.347 "assigned_rate_limits": { 00:07:10.347 "rw_ios_per_sec": 0, 00:07:10.347 "rw_mbytes_per_sec": 0, 00:07:10.347 "r_mbytes_per_sec": 
0, 00:07:10.347 "w_mbytes_per_sec": 0 00:07:10.347 }, 00:07:10.347 "claimed": false, 00:07:10.347 "zoned": false, 00:07:10.347 "supported_io_types": { 00:07:10.347 "read": true, 00:07:10.347 "write": true, 00:07:10.347 "unmap": true, 00:07:10.347 "flush": true, 00:07:10.347 "reset": true, 00:07:10.347 "nvme_admin": false, 00:07:10.347 "nvme_io": false, 00:07:10.347 "nvme_io_md": false, 00:07:10.347 "write_zeroes": true, 00:07:10.347 "zcopy": false, 00:07:10.347 "get_zone_info": false, 00:07:10.347 "zone_management": false, 00:07:10.347 "zone_append": false, 00:07:10.347 "compare": false, 00:07:10.347 "compare_and_write": false, 00:07:10.347 "abort": false, 00:07:10.347 "seek_hole": false, 00:07:10.347 "seek_data": false, 00:07:10.347 "copy": false, 00:07:10.347 "nvme_iov_md": false 00:07:10.347 }, 00:07:10.347 "memory_domains": [ 00:07:10.347 { 00:07:10.347 "dma_device_id": "system", 00:07:10.347 "dma_device_type": 1 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.347 "dma_device_type": 2 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "dma_device_id": "system", 00:07:10.347 "dma_device_type": 1 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.347 "dma_device_type": 2 00:07:10.347 } 00:07:10.347 ], 00:07:10.347 "driver_specific": { 00:07:10.347 "raid": { 00:07:10.347 "uuid": "ff66b5f7-cdf4-4b97-b948-49f9c4a571c1", 00:07:10.347 "strip_size_kb": 64, 00:07:10.347 "state": "online", 00:07:10.347 "raid_level": "concat", 00:07:10.347 "superblock": false, 00:07:10.347 "num_base_bdevs": 2, 00:07:10.347 "num_base_bdevs_discovered": 2, 00:07:10.347 "num_base_bdevs_operational": 2, 00:07:10.347 "base_bdevs_list": [ 00:07:10.347 { 00:07:10.347 "name": "BaseBdev1", 00:07:10.347 "uuid": "e99ac940-9aac-4ba3-81b7-e8c72032bf75", 00:07:10.347 "is_configured": true, 00:07:10.347 "data_offset": 0, 00:07:10.347 "data_size": 65536 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "name": "BaseBdev2", 
00:07:10.347 "uuid": "d5647fab-f327-4f36-9866-4bc23e8f3c9c", 00:07:10.347 "is_configured": true, 00:07:10.347 "data_offset": 0, 00:07:10.347 "data_size": 65536 00:07:10.347 } 00:07:10.347 ] 00:07:10.347 } 00:07:10.347 } 00:07:10.347 }' 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:10.347 BaseBdev2' 00:07:10.347 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.607 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.607 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.607 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.608 [2024-12-08 20:02:42.466638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:10.608 [2024-12-08 20:02:42.466698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.608 [2024-12-08 20:02:42.466766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.608 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.867 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.867 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.867 "name": "Existed_Raid", 00:07:10.867 "uuid": "ff66b5f7-cdf4-4b97-b948-49f9c4a571c1", 00:07:10.867 "strip_size_kb": 64, 00:07:10.867 
"state": "offline", 00:07:10.867 "raid_level": "concat", 00:07:10.867 "superblock": false, 00:07:10.867 "num_base_bdevs": 2, 00:07:10.867 "num_base_bdevs_discovered": 1, 00:07:10.867 "num_base_bdevs_operational": 1, 00:07:10.867 "base_bdevs_list": [ 00:07:10.867 { 00:07:10.867 "name": null, 00:07:10.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.867 "is_configured": false, 00:07:10.867 "data_offset": 0, 00:07:10.867 "data_size": 65536 00:07:10.867 }, 00:07:10.867 { 00:07:10.867 "name": "BaseBdev2", 00:07:10.867 "uuid": "d5647fab-f327-4f36-9866-4bc23e8f3c9c", 00:07:10.867 "is_configured": true, 00:07:10.867 "data_offset": 0, 00:07:10.867 "data_size": 65536 00:07:10.867 } 00:07:10.867 ] 00:07:10.867 }' 00:07:10.867 20:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.867 20:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.127 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.127 [2024-12-08 20:02:43.062533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:11.127 [2024-12-08 20:02:43.062624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:11.386 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.386 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:11.386 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61542 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61542 ']' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61542 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61542 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.387 killing process with pid 61542 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61542' 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61542 00:07:11.387 [2024-12-08 20:02:43.251970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.387 20:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61542 00:07:11.387 [2024-12-08 20:02:43.269099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.767 20:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:12.767 00:07:12.767 real 0m4.931s 00:07:12.767 user 0m6.900s 00:07:12.767 sys 0m0.854s 00:07:12.767 20:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.768 ************************************ 00:07:12.768 END TEST raid_state_function_test 00:07:12.768 ************************************ 00:07:12.768 20:02:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:12.768 20:02:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:12.768 20:02:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.768 20:02:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.768 ************************************ 00:07:12.768 START TEST raid_state_function_test_sb 00:07:12.768 ************************************ 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61795 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.768 Process raid pid: 61795 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61795' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61795 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61795 ']' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.768 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.768 20:02:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.768 [2024-12-08 20:02:44.624283] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:12.768 [2024-12-08 20:02:44.624417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.028 [2024-12-08 20:02:44.781469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.028 [2024-12-08 20:02:44.921649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.288 [2024-12-08 20:02:45.157239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.288 [2024-12-08 20:02:45.157296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.548 [2024-12-08 20:02:45.480604] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:13.548 [2024-12-08 20:02:45.480693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.548 [2024-12-08 20:02:45.480712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.548 [2024-12-08 20:02:45.480725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.548 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.807 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.807 "name": "Existed_Raid", 00:07:13.807 "uuid": "04de2196-8c30-49b6-a230-e0109e7a22ca", 00:07:13.807 "strip_size_kb": 64, 00:07:13.807 "state": "configuring", 00:07:13.807 "raid_level": "concat", 00:07:13.807 "superblock": true, 00:07:13.807 "num_base_bdevs": 2, 00:07:13.807 "num_base_bdevs_discovered": 0, 00:07:13.807 "num_base_bdevs_operational": 2, 00:07:13.807 "base_bdevs_list": [ 00:07:13.807 { 00:07:13.807 "name": "BaseBdev1", 00:07:13.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.807 "is_configured": false, 00:07:13.807 "data_offset": 0, 00:07:13.807 "data_size": 0 00:07:13.807 }, 00:07:13.807 { 00:07:13.807 "name": "BaseBdev2", 00:07:13.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.807 "is_configured": false, 00:07:13.807 "data_offset": 0, 00:07:13.807 "data_size": 0 00:07:13.807 } 00:07:13.807 ] 00:07:13.807 }' 00:07:13.807 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.807 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 [2024-12-08 20:02:45.923844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:14.067 [2024-12-08 20:02:45.923911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 [2024-12-08 20:02:45.935774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.067 [2024-12-08 20:02:45.935828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.067 [2024-12-08 20:02:45.935839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.067 [2024-12-08 20:02:45.935853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 [2024-12-08 20:02:45.989813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.067 BaseBdev1 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.067 20:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.067 [ 00:07:14.067 { 00:07:14.067 "name": "BaseBdev1", 00:07:14.067 "aliases": [ 00:07:14.067 "b1520ab5-1dc9-4b00-a1de-9e285528b3e1" 00:07:14.067 ], 00:07:14.067 "product_name": "Malloc disk", 00:07:14.067 "block_size": 512, 00:07:14.067 "num_blocks": 65536, 00:07:14.067 "uuid": "b1520ab5-1dc9-4b00-a1de-9e285528b3e1", 00:07:14.067 "assigned_rate_limits": { 00:07:14.067 "rw_ios_per_sec": 0, 00:07:14.067 "rw_mbytes_per_sec": 0, 00:07:14.067 "r_mbytes_per_sec": 0, 00:07:14.067 "w_mbytes_per_sec": 0 00:07:14.067 }, 00:07:14.067 "claimed": true, 
00:07:14.067 "claim_type": "exclusive_write", 00:07:14.067 "zoned": false, 00:07:14.067 "supported_io_types": { 00:07:14.067 "read": true, 00:07:14.067 "write": true, 00:07:14.067 "unmap": true, 00:07:14.067 "flush": true, 00:07:14.067 "reset": true, 00:07:14.067 "nvme_admin": false, 00:07:14.067 "nvme_io": false, 00:07:14.067 "nvme_io_md": false, 00:07:14.067 "write_zeroes": true, 00:07:14.067 "zcopy": true, 00:07:14.067 "get_zone_info": false, 00:07:14.067 "zone_management": false, 00:07:14.067 "zone_append": false, 00:07:14.067 "compare": false, 00:07:14.067 "compare_and_write": false, 00:07:14.067 "abort": true, 00:07:14.067 "seek_hole": false, 00:07:14.067 "seek_data": false, 00:07:14.067 "copy": true, 00:07:14.067 "nvme_iov_md": false 00:07:14.067 }, 00:07:14.067 "memory_domains": [ 00:07:14.067 { 00:07:14.067 "dma_device_id": "system", 00:07:14.067 "dma_device_type": 1 00:07:14.067 }, 00:07:14.067 { 00:07:14.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.067 "dma_device_type": 2 00:07:14.067 } 00:07:14.067 ], 00:07:14.067 "driver_specific": {} 00:07:14.067 } 00:07:14.067 ] 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.067 20:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.067 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.068 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.068 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.068 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.068 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.068 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.340 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.340 "name": "Existed_Raid", 00:07:14.340 "uuid": "fb44204d-864a-4440-9f1d-25fafd37c9b7", 00:07:14.340 "strip_size_kb": 64, 00:07:14.340 "state": "configuring", 00:07:14.340 "raid_level": "concat", 00:07:14.340 "superblock": true, 00:07:14.340 "num_base_bdevs": 2, 00:07:14.340 "num_base_bdevs_discovered": 1, 00:07:14.340 "num_base_bdevs_operational": 2, 00:07:14.340 "base_bdevs_list": [ 00:07:14.340 { 00:07:14.340 "name": "BaseBdev1", 00:07:14.340 "uuid": "b1520ab5-1dc9-4b00-a1de-9e285528b3e1", 00:07:14.340 "is_configured": true, 00:07:14.340 "data_offset": 2048, 00:07:14.340 "data_size": 63488 00:07:14.340 }, 00:07:14.340 { 00:07:14.340 "name": "BaseBdev2", 00:07:14.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.340 
"is_configured": false, 00:07:14.340 "data_offset": 0, 00:07:14.340 "data_size": 0 00:07:14.340 } 00:07:14.340 ] 00:07:14.340 }' 00:07:14.340 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.340 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 [2024-12-08 20:02:46.425155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.600 [2024-12-08 20:02:46.425241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 [2024-12-08 20:02:46.437178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.600 [2024-12-08 20:02:46.439450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.600 [2024-12-08 20:02:46.439505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.600 20:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.600 20:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.600 "name": "Existed_Raid", 00:07:14.600 "uuid": "fb475fca-7d17-4e24-a826-c5079b389926", 00:07:14.600 "strip_size_kb": 64, 00:07:14.600 "state": "configuring", 00:07:14.600 "raid_level": "concat", 00:07:14.600 "superblock": true, 00:07:14.600 "num_base_bdevs": 2, 00:07:14.600 "num_base_bdevs_discovered": 1, 00:07:14.600 "num_base_bdevs_operational": 2, 00:07:14.600 "base_bdevs_list": [ 00:07:14.600 { 00:07:14.600 "name": "BaseBdev1", 00:07:14.600 "uuid": "b1520ab5-1dc9-4b00-a1de-9e285528b3e1", 00:07:14.600 "is_configured": true, 00:07:14.600 "data_offset": 2048, 00:07:14.600 "data_size": 63488 00:07:14.600 }, 00:07:14.600 { 00:07:14.600 "name": "BaseBdev2", 00:07:14.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.600 "is_configured": false, 00:07:14.600 "data_offset": 0, 00:07:14.600 "data_size": 0 00:07:14.600 } 00:07:14.600 ] 00:07:14.600 }' 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.600 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.860 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.860 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.860 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 [2024-12-08 20:02:46.873883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.123 [2024-12-08 20:02:46.874226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.123 [2024-12-08 20:02:46.874250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.123 [2024-12-08 20:02:46.874614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:15.123 BaseBdev2 00:07:15.123 [2024-12-08 20:02:46.874837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.123 [2024-12-08 20:02:46.874864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.123 [2024-12-08 20:02:46.875067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.123 20:02:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 [ 00:07:15.123 { 00:07:15.123 "name": "BaseBdev2", 00:07:15.123 "aliases": [ 00:07:15.123 "25517558-0b63-4203-ab22-55bd680e0656" 00:07:15.123 ], 00:07:15.123 "product_name": "Malloc disk", 00:07:15.123 "block_size": 512, 00:07:15.123 "num_blocks": 65536, 00:07:15.123 "uuid": "25517558-0b63-4203-ab22-55bd680e0656", 00:07:15.123 "assigned_rate_limits": { 00:07:15.123 "rw_ios_per_sec": 0, 00:07:15.123 "rw_mbytes_per_sec": 0, 00:07:15.123 "r_mbytes_per_sec": 0, 00:07:15.123 "w_mbytes_per_sec": 0 00:07:15.123 }, 00:07:15.123 "claimed": true, 00:07:15.123 "claim_type": "exclusive_write", 00:07:15.123 "zoned": false, 00:07:15.123 "supported_io_types": { 00:07:15.123 "read": true, 00:07:15.123 "write": true, 00:07:15.123 "unmap": true, 00:07:15.123 "flush": true, 00:07:15.123 "reset": true, 00:07:15.123 "nvme_admin": false, 00:07:15.123 "nvme_io": false, 00:07:15.123 "nvme_io_md": false, 00:07:15.123 "write_zeroes": true, 00:07:15.123 "zcopy": true, 00:07:15.123 "get_zone_info": false, 00:07:15.123 "zone_management": false, 00:07:15.123 "zone_append": false, 00:07:15.123 "compare": false, 00:07:15.123 "compare_and_write": false, 00:07:15.123 "abort": true, 00:07:15.123 "seek_hole": false, 00:07:15.123 "seek_data": false, 00:07:15.123 "copy": true, 00:07:15.123 "nvme_iov_md": false 00:07:15.123 }, 00:07:15.123 "memory_domains": [ 00:07:15.123 { 00:07:15.123 "dma_device_id": "system", 00:07:15.123 "dma_device_type": 1 00:07:15.123 }, 00:07:15.123 { 00:07:15.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.123 "dma_device_type": 2 00:07:15.123 } 00:07:15.123 ], 00:07:15.123 "driver_specific": {} 00:07:15.123 } 00:07:15.123 ] 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.123 20:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.123 20:02:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.123 "name": "Existed_Raid", 00:07:15.123 "uuid": "fb475fca-7d17-4e24-a826-c5079b389926", 00:07:15.123 "strip_size_kb": 64, 00:07:15.123 "state": "online", 00:07:15.123 "raid_level": "concat", 00:07:15.123 "superblock": true, 00:07:15.123 "num_base_bdevs": 2, 00:07:15.123 "num_base_bdevs_discovered": 2, 00:07:15.123 "num_base_bdevs_operational": 2, 00:07:15.123 "base_bdevs_list": [ 00:07:15.123 { 00:07:15.123 "name": "BaseBdev1", 00:07:15.123 "uuid": "b1520ab5-1dc9-4b00-a1de-9e285528b3e1", 00:07:15.123 "is_configured": true, 00:07:15.123 "data_offset": 2048, 00:07:15.123 "data_size": 63488 00:07:15.123 }, 00:07:15.123 { 00:07:15.123 "name": "BaseBdev2", 00:07:15.123 "uuid": "25517558-0b63-4203-ab22-55bd680e0656", 00:07:15.123 "is_configured": true, 00:07:15.123 "data_offset": 2048, 00:07:15.123 "data_size": 63488 00:07:15.123 } 00:07:15.123 ] 00:07:15.123 }' 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.123 20:02:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.386 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 [2024-12-08 20:02:47.365424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.646 "name": "Existed_Raid", 00:07:15.646 "aliases": [ 00:07:15.646 "fb475fca-7d17-4e24-a826-c5079b389926" 00:07:15.646 ], 00:07:15.646 "product_name": "Raid Volume", 00:07:15.646 "block_size": 512, 00:07:15.646 "num_blocks": 126976, 00:07:15.646 "uuid": "fb475fca-7d17-4e24-a826-c5079b389926", 00:07:15.646 "assigned_rate_limits": { 00:07:15.646 "rw_ios_per_sec": 0, 00:07:15.646 "rw_mbytes_per_sec": 0, 00:07:15.646 "r_mbytes_per_sec": 0, 00:07:15.646 "w_mbytes_per_sec": 0 00:07:15.646 }, 00:07:15.646 "claimed": false, 00:07:15.646 "zoned": false, 00:07:15.646 "supported_io_types": { 00:07:15.646 "read": true, 00:07:15.646 "write": true, 00:07:15.646 "unmap": true, 00:07:15.646 "flush": true, 00:07:15.646 "reset": true, 00:07:15.646 "nvme_admin": false, 00:07:15.646 "nvme_io": false, 00:07:15.646 "nvme_io_md": false, 00:07:15.646 "write_zeroes": true, 00:07:15.646 "zcopy": false, 00:07:15.646 "get_zone_info": false, 00:07:15.646 "zone_management": false, 00:07:15.646 "zone_append": false, 00:07:15.646 "compare": false, 00:07:15.646 "compare_and_write": false, 00:07:15.646 "abort": false, 00:07:15.646 "seek_hole": false, 00:07:15.646 "seek_data": false, 00:07:15.646 "copy": false, 00:07:15.646 "nvme_iov_md": false 00:07:15.646 }, 00:07:15.646 "memory_domains": [ 00:07:15.646 { 00:07:15.646 
"dma_device_id": "system", 00:07:15.646 "dma_device_type": 1 00:07:15.646 }, 00:07:15.646 { 00:07:15.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.646 "dma_device_type": 2 00:07:15.646 }, 00:07:15.646 { 00:07:15.646 "dma_device_id": "system", 00:07:15.646 "dma_device_type": 1 00:07:15.646 }, 00:07:15.646 { 00:07:15.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.646 "dma_device_type": 2 00:07:15.646 } 00:07:15.646 ], 00:07:15.646 "driver_specific": { 00:07:15.646 "raid": { 00:07:15.646 "uuid": "fb475fca-7d17-4e24-a826-c5079b389926", 00:07:15.646 "strip_size_kb": 64, 00:07:15.646 "state": "online", 00:07:15.646 "raid_level": "concat", 00:07:15.646 "superblock": true, 00:07:15.646 "num_base_bdevs": 2, 00:07:15.646 "num_base_bdevs_discovered": 2, 00:07:15.646 "num_base_bdevs_operational": 2, 00:07:15.646 "base_bdevs_list": [ 00:07:15.646 { 00:07:15.646 "name": "BaseBdev1", 00:07:15.646 "uuid": "b1520ab5-1dc9-4b00-a1de-9e285528b3e1", 00:07:15.646 "is_configured": true, 00:07:15.646 "data_offset": 2048, 00:07:15.646 "data_size": 63488 00:07:15.646 }, 00:07:15.646 { 00:07:15.646 "name": "BaseBdev2", 00:07:15.646 "uuid": "25517558-0b63-4203-ab22-55bd680e0656", 00:07:15.646 "is_configured": true, 00:07:15.646 "data_offset": 2048, 00:07:15.646 "data_size": 63488 00:07:15.646 } 00:07:15.646 ] 00:07:15.646 } 00:07:15.646 } 00:07:15.646 }' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:15.646 BaseBdev2' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.646 20:02:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.646 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 [2024-12-08 20:02:47.588779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.646 [2024-12-08 20:02:47.588836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.646 [2024-12-08 20:02:47.588899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.907 "name": "Existed_Raid", 00:07:15.907 "uuid": "fb475fca-7d17-4e24-a826-c5079b389926", 00:07:15.907 "strip_size_kb": 64, 00:07:15.907 "state": "offline", 00:07:15.907 "raid_level": "concat", 00:07:15.907 "superblock": true, 00:07:15.907 "num_base_bdevs": 2, 00:07:15.907 "num_base_bdevs_discovered": 1, 00:07:15.907 "num_base_bdevs_operational": 1, 00:07:15.907 "base_bdevs_list": [ 00:07:15.907 { 00:07:15.907 "name": null, 00:07:15.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.907 "is_configured": false, 00:07:15.907 "data_offset": 0, 00:07:15.907 "data_size": 63488 00:07:15.907 }, 00:07:15.907 { 00:07:15.907 "name": "BaseBdev2", 00:07:15.907 "uuid": "25517558-0b63-4203-ab22-55bd680e0656", 00:07:15.907 "is_configured": true, 00:07:15.907 "data_offset": 2048, 00:07:15.907 "data_size": 63488 00:07:15.907 } 00:07:15.907 ] 
00:07:15.907 }' 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.907 20:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.220 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.528 [2024-12-08 20:02:48.176468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.528 [2024-12-08 20:02:48.176561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.528 20:02:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61795 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61795 ']' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61795 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61795 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:16.528 killing process with pid 61795 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61795' 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61795 00:07:16.528 [2024-12-08 20:02:48.375941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.528 20:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61795 00:07:16.528 [2024-12-08 20:02:48.394403] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.913 20:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:17.913 00:07:17.913 real 0m5.061s 00:07:17.913 user 0m7.096s 00:07:17.913 sys 0m0.883s 00:07:17.913 20:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.913 20:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 ************************************ 00:07:17.913 END TEST raid_state_function_test_sb 00:07:17.913 ************************************ 00:07:17.913 20:02:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:17.913 20:02:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:17.913 20:02:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.913 20:02:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 ************************************ 00:07:17.913 START TEST raid_superblock_test 00:07:17.913 ************************************ 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62047 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62047 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62047 ']' 00:07:17.913 20:02:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.913 20:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 [2024-12-08 20:02:49.758915] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:17.913 [2024-12-08 20:02:49.759204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62047 ] 00:07:18.174 [2024-12-08 20:02:49.936146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.174 [2024-12-08 20:02:50.078453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.434 [2024-12-08 20:02:50.317280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.434 [2024-12-08 20:02:50.317516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:18.695 
20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.695 malloc1 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.695 [2024-12-08 20:02:50.626431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.695 [2024-12-08 20:02:50.626623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.695 [2024-12-08 20:02:50.626658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:18.695 [2024-12-08 20:02:50.626671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:18.695 [2024-12-08 20:02:50.629305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.695 [2024-12-08 20:02:50.629351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.695 pt1 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.695 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.956 malloc2 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.956 [2024-12-08 20:02:50.687084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:18.956 [2024-12-08 20:02:50.687244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.956 [2024-12-08 20:02:50.687296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:18.956 [2024-12-08 20:02:50.687335] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.956 [2024-12-08 20:02:50.689780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.956 [2024-12-08 20:02:50.689863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:18.956 pt2 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.956 [2024-12-08 20:02:50.699160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:18.956 [2024-12-08 20:02:50.701427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:18.956 [2024-12-08 20:02:50.701686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:18.956 [2024-12-08 20:02:50.701753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:18.956 [2024-12-08 20:02:50.702091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.956 [2024-12-08 20:02:50.702327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:18.956 [2024-12-08 20:02:50.702377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:18.956 [2024-12-08 20:02:50.702626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.956 20:02:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.956 "name": "raid_bdev1", 00:07:18.956 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:18.956 "strip_size_kb": 64, 00:07:18.956 "state": "online", 00:07:18.956 "raid_level": "concat", 00:07:18.956 "superblock": true, 00:07:18.956 "num_base_bdevs": 2, 00:07:18.956 "num_base_bdevs_discovered": 2, 00:07:18.956 "num_base_bdevs_operational": 2, 00:07:18.956 "base_bdevs_list": [ 00:07:18.956 { 00:07:18.956 "name": "pt1", 00:07:18.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.956 "is_configured": true, 00:07:18.956 "data_offset": 2048, 00:07:18.956 "data_size": 63488 00:07:18.956 }, 00:07:18.956 { 00:07:18.956 "name": "pt2", 00:07:18.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.956 "is_configured": true, 00:07:18.956 "data_offset": 2048, 00:07:18.956 "data_size": 63488 00:07:18.956 } 00:07:18.956 ] 00:07:18.956 }' 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.956 20:02:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.216 
20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.216 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.216 [2024-12-08 20:02:51.174745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.476 "name": "raid_bdev1", 00:07:19.476 "aliases": [ 00:07:19.476 "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092" 00:07:19.476 ], 00:07:19.476 "product_name": "Raid Volume", 00:07:19.476 "block_size": 512, 00:07:19.476 "num_blocks": 126976, 00:07:19.476 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:19.476 "assigned_rate_limits": { 00:07:19.476 "rw_ios_per_sec": 0, 00:07:19.476 "rw_mbytes_per_sec": 0, 00:07:19.476 "r_mbytes_per_sec": 0, 00:07:19.476 "w_mbytes_per_sec": 0 00:07:19.476 }, 00:07:19.476 "claimed": false, 00:07:19.476 "zoned": false, 00:07:19.476 "supported_io_types": { 00:07:19.476 "read": true, 00:07:19.476 "write": true, 00:07:19.476 "unmap": true, 00:07:19.476 "flush": true, 00:07:19.476 "reset": true, 00:07:19.476 "nvme_admin": false, 00:07:19.476 "nvme_io": false, 00:07:19.476 "nvme_io_md": false, 00:07:19.476 "write_zeroes": true, 00:07:19.476 "zcopy": false, 00:07:19.476 "get_zone_info": false, 00:07:19.476 "zone_management": false, 00:07:19.476 "zone_append": false, 00:07:19.476 "compare": false, 00:07:19.476 "compare_and_write": false, 00:07:19.476 "abort": false, 00:07:19.476 "seek_hole": false, 00:07:19.476 
"seek_data": false, 00:07:19.476 "copy": false, 00:07:19.476 "nvme_iov_md": false 00:07:19.476 }, 00:07:19.476 "memory_domains": [ 00:07:19.476 { 00:07:19.476 "dma_device_id": "system", 00:07:19.476 "dma_device_type": 1 00:07:19.476 }, 00:07:19.476 { 00:07:19.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.476 "dma_device_type": 2 00:07:19.476 }, 00:07:19.476 { 00:07:19.476 "dma_device_id": "system", 00:07:19.476 "dma_device_type": 1 00:07:19.476 }, 00:07:19.476 { 00:07:19.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.476 "dma_device_type": 2 00:07:19.476 } 00:07:19.476 ], 00:07:19.476 "driver_specific": { 00:07:19.476 "raid": { 00:07:19.476 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:19.476 "strip_size_kb": 64, 00:07:19.476 "state": "online", 00:07:19.476 "raid_level": "concat", 00:07:19.476 "superblock": true, 00:07:19.476 "num_base_bdevs": 2, 00:07:19.476 "num_base_bdevs_discovered": 2, 00:07:19.476 "num_base_bdevs_operational": 2, 00:07:19.476 "base_bdevs_list": [ 00:07:19.476 { 00:07:19.476 "name": "pt1", 00:07:19.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.476 "is_configured": true, 00:07:19.476 "data_offset": 2048, 00:07:19.476 "data_size": 63488 00:07:19.476 }, 00:07:19.476 { 00:07:19.476 "name": "pt2", 00:07:19.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.476 "is_configured": true, 00:07:19.476 "data_offset": 2048, 00:07:19.476 "data_size": 63488 00:07:19.476 } 00:07:19.476 ] 00:07:19.476 } 00:07:19.476 } 00:07:19.476 }' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:19.476 pt2' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.476 20:02:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.476 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.477 [2024-12-08 20:02:51.414257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.477 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=19e15a5b-bf5a-46c6-8f54-e4c96d5e1092 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 19e15a5b-bf5a-46c6-8f54-e4c96d5e1092 ']' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 [2024-12-08 20:02:51.461880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.737 [2024-12-08 20:02:51.461911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.737 [2024-12-08 20:02:51.462023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.737 [2024-12-08 20:02:51.462084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.737 [2024-12-08 20:02:51.462099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 [2024-12-08 20:02:51.589746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:19.737 [2024-12-08 20:02:51.592050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:19.737 [2024-12-08 20:02:51.592148] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:19.737 [2024-12-08 20:02:51.592218] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:19.737 [2024-12-08 20:02:51.592236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.737 [2024-12-08 20:02:51.592250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:19.737 request: 00:07:19.737 { 00:07:19.737 "name": "raid_bdev1", 00:07:19.737 "raid_level": "concat", 00:07:19.737 "base_bdevs": [ 00:07:19.737 "malloc1", 00:07:19.737 "malloc2" 00:07:19.737 ], 00:07:19.737 "strip_size_kb": 64, 00:07:19.737 "superblock": false, 00:07:19.737 "method": "bdev_raid_create", 00:07:19.737 "req_id": 1 00:07:19.737 } 00:07:19.737 Got JSON-RPC error response 00:07:19.737 response: 00:07:19.737 { 00:07:19.737 "code": -17, 00:07:19.737 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:19.737 } 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.737 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.737 [2024-12-08 20:02:51.637703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.737 [2024-12-08 20:02:51.637912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.737 [2024-12-08 20:02:51.637975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:19.737 [2024-12-08 20:02:51.638026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.738 [2024-12-08 20:02:51.640731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.738 [2024-12-08 20:02:51.640847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.738 [2024-12-08 20:02:51.641021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:19.738 [2024-12-08 20:02:51.641161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.738 pt1 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.738 "name": "raid_bdev1", 00:07:19.738 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:19.738 "strip_size_kb": 64, 00:07:19.738 "state": "configuring", 00:07:19.738 "raid_level": "concat", 00:07:19.738 "superblock": true, 00:07:19.738 "num_base_bdevs": 2, 00:07:19.738 "num_base_bdevs_discovered": 1, 00:07:19.738 "num_base_bdevs_operational": 2, 00:07:19.738 "base_bdevs_list": [ 00:07:19.738 { 00:07:19.738 
"name": "pt1", 00:07:19.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.738 "is_configured": true, 00:07:19.738 "data_offset": 2048, 00:07:19.738 "data_size": 63488 00:07:19.738 }, 00:07:19.738 { 00:07:19.738 "name": null, 00:07:19.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.738 "is_configured": false, 00:07:19.738 "data_offset": 2048, 00:07:19.738 "data_size": 63488 00:07:19.738 } 00:07:19.738 ] 00:07:19.738 }' 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.738 20:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.307 [2024-12-08 20:02:52.068974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.307 [2024-12-08 20:02:52.069172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.307 [2024-12-08 20:02:52.069264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:20.307 [2024-12-08 20:02:52.069316] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.307 [2024-12-08 20:02:52.069983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.307 [2024-12-08 20:02:52.070067] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.307 [2024-12-08 20:02:52.070258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:20.307 [2024-12-08 20:02:52.070343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.307 [2024-12-08 20:02:52.070529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:20.307 [2024-12-08 20:02:52.070579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.307 [2024-12-08 20:02:52.070884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:20.307 [2024-12-08 20:02:52.071084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:20.307 [2024-12-08 20:02:52.071096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:20.307 [2024-12-08 20:02:52.071271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.307 pt2 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.307 
20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.307 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.307 "name": "raid_bdev1", 00:07:20.307 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:20.307 "strip_size_kb": 64, 00:07:20.307 "state": "online", 00:07:20.307 "raid_level": "concat", 00:07:20.307 "superblock": true, 00:07:20.308 "num_base_bdevs": 2, 00:07:20.308 "num_base_bdevs_discovered": 2, 00:07:20.308 "num_base_bdevs_operational": 2, 00:07:20.308 "base_bdevs_list": [ 00:07:20.308 { 00:07:20.308 "name": "pt1", 00:07:20.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.308 "is_configured": true, 00:07:20.308 "data_offset": 2048, 00:07:20.308 "data_size": 63488 00:07:20.308 }, 00:07:20.308 { 00:07:20.308 "name": "pt2", 00:07:20.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.308 "is_configured": true, 00:07:20.308 "data_offset": 2048, 00:07:20.308 "data_size": 63488 
00:07:20.308 } 00:07:20.308 ] 00:07:20.308 }' 00:07:20.308 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.308 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.567 [2024-12-08 20:02:52.532435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.567 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.827 "name": "raid_bdev1", 00:07:20.827 "aliases": [ 00:07:20.827 "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092" 00:07:20.827 ], 00:07:20.827 "product_name": "Raid Volume", 00:07:20.827 "block_size": 512, 00:07:20.827 "num_blocks": 126976, 00:07:20.827 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:20.827 "assigned_rate_limits": { 00:07:20.827 
"rw_ios_per_sec": 0, 00:07:20.827 "rw_mbytes_per_sec": 0, 00:07:20.827 "r_mbytes_per_sec": 0, 00:07:20.827 "w_mbytes_per_sec": 0 00:07:20.827 }, 00:07:20.827 "claimed": false, 00:07:20.827 "zoned": false, 00:07:20.827 "supported_io_types": { 00:07:20.827 "read": true, 00:07:20.827 "write": true, 00:07:20.827 "unmap": true, 00:07:20.827 "flush": true, 00:07:20.827 "reset": true, 00:07:20.827 "nvme_admin": false, 00:07:20.827 "nvme_io": false, 00:07:20.827 "nvme_io_md": false, 00:07:20.827 "write_zeroes": true, 00:07:20.827 "zcopy": false, 00:07:20.827 "get_zone_info": false, 00:07:20.827 "zone_management": false, 00:07:20.827 "zone_append": false, 00:07:20.827 "compare": false, 00:07:20.827 "compare_and_write": false, 00:07:20.827 "abort": false, 00:07:20.827 "seek_hole": false, 00:07:20.827 "seek_data": false, 00:07:20.827 "copy": false, 00:07:20.827 "nvme_iov_md": false 00:07:20.827 }, 00:07:20.827 "memory_domains": [ 00:07:20.827 { 00:07:20.827 "dma_device_id": "system", 00:07:20.827 "dma_device_type": 1 00:07:20.827 }, 00:07:20.827 { 00:07:20.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.827 "dma_device_type": 2 00:07:20.827 }, 00:07:20.827 { 00:07:20.827 "dma_device_id": "system", 00:07:20.827 "dma_device_type": 1 00:07:20.827 }, 00:07:20.827 { 00:07:20.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.827 "dma_device_type": 2 00:07:20.827 } 00:07:20.827 ], 00:07:20.827 "driver_specific": { 00:07:20.827 "raid": { 00:07:20.827 "uuid": "19e15a5b-bf5a-46c6-8f54-e4c96d5e1092", 00:07:20.827 "strip_size_kb": 64, 00:07:20.827 "state": "online", 00:07:20.827 "raid_level": "concat", 00:07:20.827 "superblock": true, 00:07:20.827 "num_base_bdevs": 2, 00:07:20.827 "num_base_bdevs_discovered": 2, 00:07:20.827 "num_base_bdevs_operational": 2, 00:07:20.827 "base_bdevs_list": [ 00:07:20.827 { 00:07:20.827 "name": "pt1", 00:07:20.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.827 "is_configured": true, 00:07:20.827 "data_offset": 2048, 00:07:20.827 
"data_size": 63488 00:07:20.827 }, 00:07:20.827 { 00:07:20.827 "name": "pt2", 00:07:20.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.827 "is_configured": true, 00:07:20.827 "data_offset": 2048, 00:07:20.827 "data_size": 63488 00:07:20.827 } 00:07:20.827 ] 00:07:20.827 } 00:07:20.827 } 00:07:20.827 }' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:20.827 pt2' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.827 [2024-12-08 20:02:52.768121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 19e15a5b-bf5a-46c6-8f54-e4c96d5e1092 '!=' 19e15a5b-bf5a-46c6-8f54-e4c96d5e1092 ']' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62047 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62047 ']' 
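The `killprocess 62047` sequence above probes the pid with `kill -0`, reads the process name via `ps --no-headers -o comm=` (here `reactor_0`), and refuses to signal anything named `sudo` before sending the real kill. A rough Python rendering of that guard, assuming a pluggable `name_lookup` in place of the `ps` call (this is an illustration of the pattern, not the harness function itself):

```python
import os
import signal


def kill_if_safe(pid, name_lookup):
    """Signal pid only if it is alive and is not a sudo wrapper.

    name_lookup(pid) stands in for `ps --no-headers -o comm= <pid>`.
    """
    try:
        os.kill(pid, 0)          # kill -0: existence probe, sends no signal
    except ProcessLookupError:
        return False             # process already gone, nothing to do
    if name_lookup(pid) == "sudo":
        # Killing the sudo wrapper would orphan the real workload.
        raise RuntimeError("refusing to kill a sudo process")
    os.kill(pid, signal.SIGTERM)
    return True
```

The `kill -0` probe is the standard cheap liveness check: it performs permission and existence checks in the kernel without delivering a signal.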
00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62047 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.827 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62047 00:07:21.087 killing process with pid 62047 00:07:21.087 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.087 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.087 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62047' 00:07:21.087 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62047 00:07:21.087 [2024-12-08 20:02:52.837220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.087 20:02:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62047 00:07:21.087 [2024-12-08 20:02:52.837365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.087 [2024-12-08 20:02:52.837435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.087 [2024-12-08 20:02:52.837452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:21.346 [2024-12-08 20:02:53.071825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.724 ************************************ 00:07:22.724 END TEST raid_superblock_test 00:07:22.724 ************************************ 00:07:22.725 20:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:22.725 00:07:22.725 real 0m4.633s 00:07:22.725 user 0m6.334s 00:07:22.725 sys 
0m0.826s 00:07:22.725 20:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.725 20:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.725 20:02:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:22.725 20:02:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.725 20:02:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.725 20:02:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.725 ************************************ 00:07:22.725 START TEST raid_read_error_test 00:07:22.725 ************************************ 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.725 
20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kwItF2UjGX 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62259 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62259 00:07:22.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
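`waitforlisten 62259` blocks until the freshly started bdevperf process accepts RPCs on `/var/tmp/spdk.sock` ("Waiting for process to start up and listen on UNIX domain socket..."). The pattern is a bounded poll loop over a Unix-domain connect attempt; a generic sketch, where the timeout and retry interval are assumptions rather than the script's actual values:

```python
import socket
import time


def wait_for_unix_socket(path, timeout=10.0, interval=0.1):
    """Poll until something accepts connections on `path` or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)      # succeeds once the server is listening
            return True
        except OSError:
            time.sleep(interval) # not up yet (or socket file missing); retry
        finally:
            s.close()
    return False
```

Polling the connect rather than just the socket file's existence matters: the file can exist (stale from a previous run) before anything is actually accepting connections.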
00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62259 ']' 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.725 20:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.725 [2024-12-08 20:02:54.462276] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:22.725 [2024-12-08 20:02:54.462389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62259 ] 00:07:22.725 [2024-12-08 20:02:54.635779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.984 [2024-12-08 20:02:54.774190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.243 [2024-12-08 20:02:55.016119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.243 [2024-12-08 20:02:55.016208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 BaseBdev1_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 true 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 [2024-12-08 20:02:55.357914] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:23.503 [2024-12-08 20:02:55.358033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.503 [2024-12-08 20:02:55.358063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:23.503 [2024-12-08 20:02:55.358081] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.503 [2024-12-08 20:02:55.361070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.503 [2024-12-08 20:02:55.361127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:23.503 BaseBdev1 00:07:23.503 
20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 BaseBdev2_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 true 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 [2024-12-08 20:02:55.433439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:23.503 [2024-12-08 20:02:55.433529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.503 [2024-12-08 20:02:55.433552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:23.503 [2024-12-08 20:02:55.433566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.503 [2024-12-08 
20:02:55.436170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.503 [2024-12-08 20:02:55.436219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:23.503 BaseBdev2 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 [2024-12-08 20:02:55.445531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.503 [2024-12-08 20:02:55.447826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.503 [2024-12-08 20:02:55.448230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.503 [2024-12-08 20:02:55.448255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.503 [2024-12-08 20:02:55.448592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:23.503 [2024-12-08 20:02:55.448807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.503 [2024-12-08 20:02:55.448822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:23.503 [2024-12-08 20:02:55.449051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 
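`verify_raid_bdev_state raid_bdev1 online concat 64 2` pulls the matching entry out of `bdev_raid_get_bdevs all` with `jq 'select(.name == "raid_bdev1")'` and compares state, raid level, strip size, and the base-bdev counts against the expected values. A minimal re-statement of that check in Python; the sample record is a trimmed, hypothetical version of the JSON this RPC returns, keeping only the fields the check reads:

```python
def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the jq select-by-name lookup plus the per-field comparisons.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must agree with how many base bdevs are configured.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered


# Trimmed sample in the shape bdev_raid_get_bdevs returns in this log.
sample = [{
    "name": "raid_bdev1", "state": "online", "raid_level": "concat",
    "strip_size_kb": 64, "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
    ],
}]
verify_raid_bdev_state(sample, "raid_bdev1", "online", "concat", 64, 2)
```

Earlier in the trace the same helper is called with `configuring` while only `pt1` is attached (`num_base_bdevs_discovered: 1`); once `pt2` joins, the raid transitions to `online` and both counts read 2, which is exactly what this assertion set encodes.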
00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.764 "name": "raid_bdev1", 00:07:23.764 "uuid": "3fcad17f-2bb7-4223-987c-3bd8af47a47f", 00:07:23.764 "strip_size_kb": 64, 00:07:23.764 "state": "online", 00:07:23.764 "raid_level": "concat", 00:07:23.764 "superblock": true, 00:07:23.764 "num_base_bdevs": 2, 00:07:23.764 "num_base_bdevs_discovered": 2, 00:07:23.764 "num_base_bdevs_operational": 2, 00:07:23.764 
"base_bdevs_list": [ 00:07:23.764 { 00:07:23.764 "name": "BaseBdev1", 00:07:23.764 "uuid": "337b9118-82d1-525f-86df-f88ead58f9d7", 00:07:23.764 "is_configured": true, 00:07:23.764 "data_offset": 2048, 00:07:23.764 "data_size": 63488 00:07:23.764 }, 00:07:23.764 { 00:07:23.764 "name": "BaseBdev2", 00:07:23.764 "uuid": "8a3c90c1-01f2-5a77-871e-37a8bb51bc69", 00:07:23.764 "is_configured": true, 00:07:23.764 "data_offset": 2048, 00:07:23.764 "data_size": 63488 00:07:23.764 } 00:07:23.764 ] 00:07:23.764 }' 00:07:23.764 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.764 20:02:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.025 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.025 20:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.025 [2024-12-08 20:02:55.970366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 
2 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.964 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.224 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.224 "name": "raid_bdev1", 00:07:25.224 "uuid": "3fcad17f-2bb7-4223-987c-3bd8af47a47f", 00:07:25.224 "strip_size_kb": 64, 00:07:25.224 "state": "online", 00:07:25.224 "raid_level": "concat", 00:07:25.224 "superblock": true, 00:07:25.224 "num_base_bdevs": 2, 00:07:25.224 "num_base_bdevs_discovered": 2, 00:07:25.224 "num_base_bdevs_operational": 2, 00:07:25.224 
"base_bdevs_list": [ 00:07:25.224 { 00:07:25.224 "name": "BaseBdev1", 00:07:25.224 "uuid": "337b9118-82d1-525f-86df-f88ead58f9d7", 00:07:25.224 "is_configured": true, 00:07:25.224 "data_offset": 2048, 00:07:25.224 "data_size": 63488 00:07:25.224 }, 00:07:25.224 { 00:07:25.224 "name": "BaseBdev2", 00:07:25.224 "uuid": "8a3c90c1-01f2-5a77-871e-37a8bb51bc69", 00:07:25.224 "is_configured": true, 00:07:25.224 "data_offset": 2048, 00:07:25.224 "data_size": 63488 00:07:25.224 } 00:07:25.224 ] 00:07:25.224 }' 00:07:25.224 20:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.224 20:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 [2024-12-08 20:02:57.331476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.484 [2024-12-08 20:02:57.331667] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.484 [2024-12-08 20:02:57.334578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.484 [2024-12-08 20:02:57.334637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.484 [2024-12-08 20:02:57.334677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.484 [2024-12-08 20:02:57.334696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:25.484 { 00:07:25.484 "results": [ 00:07:25.484 { 00:07:25.484 "job": "raid_bdev1", 00:07:25.484 "core_mask": "0x1", 00:07:25.484 "workload": "randrw", 00:07:25.484 "percentage": 50, 
00:07:25.484 "status": "finished", 00:07:25.484 "queue_depth": 1, 00:07:25.484 "io_size": 131072, 00:07:25.484 "runtime": 1.361686, 00:07:25.484 "iops": 13084.51434471677, 00:07:25.484 "mibps": 1635.5642930895963, 00:07:25.484 "io_failed": 1, 00:07:25.484 "io_timeout": 0, 00:07:25.484 "avg_latency_us": 107.13809194470436, 00:07:25.484 "min_latency_us": 28.17117903930131, 00:07:25.484 "max_latency_us": 1409.4532751091704 00:07:25.484 } 00:07:25.484 ], 00:07:25.484 "core_count": 1 00:07:25.484 } 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62259 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62259 ']' 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62259 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62259 00:07:25.484 killing process with pid 62259 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62259' 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62259 00:07:25.484 [2024-12-08 20:02:57.378456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.484 20:02:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62259 00:07:25.744 [2024-12-08 
20:02:57.528533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kwItF2UjGX 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.126 ************************************ 00:07:27.126 END TEST raid_read_error_test 00:07:27.126 ************************************ 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:27.126 00:07:27.126 real 0m4.518s 00:07:27.126 user 0m5.212s 00:07:27.126 sys 0m0.641s 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.126 20:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.126 20:02:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:27.126 20:02:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.126 20:02:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.126 20:02:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.126 ************************************ 00:07:27.126 START TEST raid_write_error_test 00:07:27.126 ************************************ 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:27.126 20:02:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:27.126 20:02:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DRRUPwMxEi 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62403 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62403 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62403 ']' 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.126 20:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.126 [2024-12-08 20:02:59.045974] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:27.126 [2024-12-08 20:02:59.046165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62403 ] 00:07:27.386 [2024-12-08 20:02:59.222152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.646 [2024-12-08 20:02:59.367731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.646 [2024-12-08 20:02:59.596337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.646 [2024-12-08 20:02:59.596495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.906 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.167 BaseBdev1_malloc 00:07:28.167 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.167 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.167 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 true 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 [2024-12-08 20:02:59.925063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.168 [2024-12-08 20:02:59.925119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.168 [2024-12-08 20:02:59.925154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.168 [2024-12-08 20:02:59.925165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.168 [2024-12-08 20:02:59.927239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.168 [2024-12-08 20:02:59.927280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.168 BaseBdev1 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 BaseBdev2_malloc 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.168 20:02:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 true 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 [2024-12-08 20:02:59.988198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.168 [2024-12-08 20:02:59.988249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.168 [2024-12-08 20:02:59.988266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.168 [2024-12-08 20:02:59.988277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.168 [2024-12-08 20:02:59.990385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.168 [2024-12-08 20:02:59.990466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.168 BaseBdev2 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 [2024-12-08 20:03:00.000243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:28.168 [2024-12-08 20:03:00.002111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.168 [2024-12-08 20:03:00.002311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.168 [2024-12-08 20:03:00.002327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.168 [2024-12-08 20:03:00.002574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.168 [2024-12-08 20:03:00.002739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.168 [2024-12-08 20:03:00.002751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.168 [2024-12-08 20:03:00.002898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.168 20:03:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.168 "name": "raid_bdev1", 00:07:28.168 "uuid": "498686be-610e-4ffe-8ef5-9f4a724c1448", 00:07:28.168 "strip_size_kb": 64, 00:07:28.168 "state": "online", 00:07:28.168 "raid_level": "concat", 00:07:28.168 "superblock": true, 00:07:28.168 "num_base_bdevs": 2, 00:07:28.168 "num_base_bdevs_discovered": 2, 00:07:28.168 "num_base_bdevs_operational": 2, 00:07:28.168 "base_bdevs_list": [ 00:07:28.168 { 00:07:28.168 "name": "BaseBdev1", 00:07:28.168 "uuid": "568937c9-12d4-51ce-8b75-219a956875c1", 00:07:28.168 "is_configured": true, 00:07:28.168 "data_offset": 2048, 00:07:28.168 "data_size": 63488 00:07:28.168 }, 00:07:28.168 { 00:07:28.168 "name": "BaseBdev2", 00:07:28.168 "uuid": "c6181f41-0691-5c7e-a144-23af5eab6183", 00:07:28.168 "is_configured": true, 00:07:28.168 "data_offset": 2048, 00:07:28.168 "data_size": 63488 00:07:28.168 } 00:07:28.168 ] 00:07:28.168 }' 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.168 20:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.739 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:28.739 20:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.739 [2024-12-08 20:03:00.556883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.678 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.679 "name": "raid_bdev1", 00:07:29.679 "uuid": "498686be-610e-4ffe-8ef5-9f4a724c1448", 00:07:29.679 "strip_size_kb": 64, 00:07:29.679 "state": "online", 00:07:29.679 "raid_level": "concat", 00:07:29.679 "superblock": true, 00:07:29.679 "num_base_bdevs": 2, 00:07:29.679 "num_base_bdevs_discovered": 2, 00:07:29.679 "num_base_bdevs_operational": 2, 00:07:29.679 "base_bdevs_list": [ 00:07:29.679 { 00:07:29.679 "name": "BaseBdev1", 00:07:29.679 "uuid": "568937c9-12d4-51ce-8b75-219a956875c1", 00:07:29.679 "is_configured": true, 00:07:29.679 "data_offset": 2048, 00:07:29.679 "data_size": 63488 00:07:29.679 }, 00:07:29.679 { 00:07:29.679 "name": "BaseBdev2", 00:07:29.679 "uuid": "c6181f41-0691-5c7e-a144-23af5eab6183", 00:07:29.679 "is_configured": true, 00:07:29.679 "data_offset": 2048, 00:07:29.679 "data_size": 63488 00:07:29.679 } 00:07:29.679 ] 00:07:29.679 }' 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.679 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 20:03:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.938 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.938 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.938 [2024-12-08 20:03:01.911138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.938 [2024-12-08 20:03:01.911277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.938 [2024-12-08 20:03:01.914386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.939 [2024-12-08 20:03:01.914471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.939 [2024-12-08 20:03:01.914510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.939 [2024-12-08 20:03:01.914525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.198 { 00:07:30.198 "results": [ 00:07:30.198 { 00:07:30.198 "job": "raid_bdev1", 00:07:30.198 "core_mask": "0x1", 00:07:30.198 "workload": "randrw", 00:07:30.198 "percentage": 50, 00:07:30.198 "status": "finished", 00:07:30.198 "queue_depth": 1, 00:07:30.198 "io_size": 131072, 00:07:30.198 "runtime": 1.35536, 00:07:30.198 "iops": 15753.010270334082, 00:07:30.198 "mibps": 1969.1262837917602, 00:07:30.198 "io_failed": 1, 00:07:30.198 "io_timeout": 0, 00:07:30.198 "avg_latency_us": 87.89614153118205, 00:07:30.198 "min_latency_us": 26.829694323144103, 00:07:30.198 "max_latency_us": 1380.8349344978167 00:07:30.198 } 00:07:30.198 ], 00:07:30.198 "core_count": 1 00:07:30.198 } 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62403 00:07:30.198 20:03:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62403 ']' 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62403 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62403 00:07:30.198 killing process with pid 62403 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62403' 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62403 00:07:30.198 [2024-12-08 20:03:01.958988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.198 20:03:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62403 00:07:30.198 [2024-12-08 20:03:02.093503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DRRUPwMxEi 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.579 20:03:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:31.579 00:07:31.579 real 0m4.334s 00:07:31.579 user 0m5.145s 00:07:31.579 sys 0m0.559s 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.579 20:03:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.579 ************************************ 00:07:31.579 END TEST raid_write_error_test 00:07:31.579 ************************************ 00:07:31.579 20:03:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:31.579 20:03:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:31.579 20:03:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.579 20:03:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.579 20:03:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.579 ************************************ 00:07:31.579 START TEST raid_state_function_test 00:07:31.579 ************************************ 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62547 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.579 Process raid pid: 62547 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62547' 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62547 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62547 ']' 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.579 20:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.579 [2024-12-08 20:03:03.436723] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:31.579 [2024-12-08 20:03:03.436855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.839 [2024-12-08 20:03:03.609865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.839 [2024-12-08 20:03:03.723367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.099 [2024-12-08 20:03:03.927221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.099 [2024-12-08 20:03:03.927260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.359 [2024-12-08 20:03:04.257905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.359 [2024-12-08 20:03:04.257985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.359 [2024-12-08 20:03:04.257999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.359 [2024-12-08 20:03:04.258009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.359 20:03:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.359 "name": "Existed_Raid", 00:07:32.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.359 "strip_size_kb": 0, 00:07:32.359 "state": "configuring", 00:07:32.359 
"raid_level": "raid1", 00:07:32.359 "superblock": false, 00:07:32.359 "num_base_bdevs": 2, 00:07:32.359 "num_base_bdevs_discovered": 0, 00:07:32.359 "num_base_bdevs_operational": 2, 00:07:32.359 "base_bdevs_list": [ 00:07:32.359 { 00:07:32.359 "name": "BaseBdev1", 00:07:32.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.359 "is_configured": false, 00:07:32.359 "data_offset": 0, 00:07:32.359 "data_size": 0 00:07:32.359 }, 00:07:32.359 { 00:07:32.359 "name": "BaseBdev2", 00:07:32.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.359 "is_configured": false, 00:07:32.359 "data_offset": 0, 00:07:32.359 "data_size": 0 00:07:32.359 } 00:07:32.359 ] 00:07:32.359 }' 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.359 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 [2024-12-08 20:03:04.685227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.929 [2024-12-08 20:03:04.685292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:32.929 [2024-12-08 20:03:04.697152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.929 [2024-12-08 20:03:04.697201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.929 [2024-12-08 20:03:04.697212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.929 [2024-12-08 20:03:04.697227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 [2024-12-08 20:03:04.749800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.929 BaseBdev1 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.929 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.929 [ 00:07:32.929 { 00:07:32.929 "name": "BaseBdev1", 00:07:32.929 "aliases": [ 00:07:32.929 "68421d7b-c37c-44cb-ac42-36a9cc07988c" 00:07:32.929 ], 00:07:32.929 "product_name": "Malloc disk", 00:07:32.929 "block_size": 512, 00:07:32.929 "num_blocks": 65536, 00:07:32.929 "uuid": "68421d7b-c37c-44cb-ac42-36a9cc07988c", 00:07:32.929 "assigned_rate_limits": { 00:07:32.929 "rw_ios_per_sec": 0, 00:07:32.929 "rw_mbytes_per_sec": 0, 00:07:32.929 "r_mbytes_per_sec": 0, 00:07:32.929 "w_mbytes_per_sec": 0 00:07:32.929 }, 00:07:32.929 "claimed": true, 00:07:32.929 "claim_type": "exclusive_write", 00:07:32.929 "zoned": false, 00:07:32.929 "supported_io_types": { 00:07:32.929 "read": true, 00:07:32.929 "write": true, 00:07:32.929 "unmap": true, 00:07:32.929 "flush": true, 00:07:32.929 "reset": true, 00:07:32.929 "nvme_admin": false, 00:07:32.929 "nvme_io": false, 00:07:32.929 "nvme_io_md": false, 00:07:32.929 "write_zeroes": true, 00:07:32.929 "zcopy": true, 00:07:32.929 "get_zone_info": false, 00:07:32.929 "zone_management": false, 00:07:32.929 "zone_append": false, 00:07:32.929 "compare": false, 00:07:32.929 "compare_and_write": false, 00:07:32.929 "abort": true, 00:07:32.929 "seek_hole": false, 00:07:32.929 "seek_data": false, 00:07:32.929 "copy": true, 00:07:32.929 "nvme_iov_md": 
false 00:07:32.929 }, 00:07:32.929 "memory_domains": [ 00:07:32.929 { 00:07:32.929 "dma_device_id": "system", 00:07:32.929 "dma_device_type": 1 00:07:32.929 }, 00:07:32.929 { 00:07:32.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.929 "dma_device_type": 2 00:07:32.929 } 00:07:32.929 ], 00:07:32.930 "driver_specific": {} 00:07:32.930 } 00:07:32.930 ] 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.930 20:03:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.930 "name": "Existed_Raid", 00:07:32.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.930 "strip_size_kb": 0, 00:07:32.930 "state": "configuring", 00:07:32.930 "raid_level": "raid1", 00:07:32.930 "superblock": false, 00:07:32.930 "num_base_bdevs": 2, 00:07:32.930 "num_base_bdevs_discovered": 1, 00:07:32.930 "num_base_bdevs_operational": 2, 00:07:32.930 "base_bdevs_list": [ 00:07:32.930 { 00:07:32.930 "name": "BaseBdev1", 00:07:32.930 "uuid": "68421d7b-c37c-44cb-ac42-36a9cc07988c", 00:07:32.930 "is_configured": true, 00:07:32.930 "data_offset": 0, 00:07:32.930 "data_size": 65536 00:07:32.930 }, 00:07:32.930 { 00:07:32.930 "name": "BaseBdev2", 00:07:32.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.930 "is_configured": false, 00:07:32.930 "data_offset": 0, 00:07:32.930 "data_size": 0 00:07:32.930 } 00:07:32.930 ] 00:07:32.930 }' 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.930 20:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.509 [2024-12-08 20:03:05.241053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.509 [2024-12-08 20:03:05.241109] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.509 [2024-12-08 20:03:05.249085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.509 [2024-12-08 20:03:05.251159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.509 [2024-12-08 20:03:05.251221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.509 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.510 "name": "Existed_Raid", 00:07:33.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.510 "strip_size_kb": 0, 00:07:33.510 "state": "configuring", 00:07:33.510 "raid_level": "raid1", 00:07:33.510 "superblock": false, 00:07:33.510 "num_base_bdevs": 2, 00:07:33.510 "num_base_bdevs_discovered": 1, 00:07:33.510 "num_base_bdevs_operational": 2, 00:07:33.510 "base_bdevs_list": [ 00:07:33.510 { 00:07:33.510 "name": "BaseBdev1", 00:07:33.510 "uuid": "68421d7b-c37c-44cb-ac42-36a9cc07988c", 00:07:33.510 "is_configured": true, 00:07:33.510 "data_offset": 0, 00:07:33.510 "data_size": 65536 00:07:33.510 }, 00:07:33.510 { 00:07:33.510 "name": "BaseBdev2", 00:07:33.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.510 "is_configured": false, 00:07:33.510 "data_offset": 0, 00:07:33.510 "data_size": 0 00:07:33.510 } 00:07:33.510 
] 00:07:33.510 }' 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.510 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 [2024-12-08 20:03:05.707712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.769 [2024-12-08 20:03:05.707794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.769 [2024-12-08 20:03:05.707805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:33.769 [2024-12-08 20:03:05.708296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.769 [2024-12-08 20:03:05.708591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.769 [2024-12-08 20:03:05.708618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:33.769 [2024-12-08 20:03:05.709049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.769 BaseBdev2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.769 20:03:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 [ 00:07:33.769 { 00:07:33.769 "name": "BaseBdev2", 00:07:33.769 "aliases": [ 00:07:33.769 "dcef589b-9cb3-4d67-aa23-d0c4ff28f5d6" 00:07:33.769 ], 00:07:33.769 "product_name": "Malloc disk", 00:07:33.769 "block_size": 512, 00:07:33.769 "num_blocks": 65536, 00:07:33.769 "uuid": "dcef589b-9cb3-4d67-aa23-d0c4ff28f5d6", 00:07:33.769 "assigned_rate_limits": { 00:07:33.769 "rw_ios_per_sec": 0, 00:07:33.769 "rw_mbytes_per_sec": 0, 00:07:33.769 "r_mbytes_per_sec": 0, 00:07:33.769 "w_mbytes_per_sec": 0 00:07:33.769 }, 00:07:33.769 "claimed": true, 00:07:33.769 "claim_type": "exclusive_write", 00:07:33.769 "zoned": false, 00:07:33.769 "supported_io_types": { 00:07:33.769 "read": true, 00:07:33.769 "write": true, 00:07:33.769 "unmap": true, 00:07:33.769 "flush": true, 00:07:33.769 "reset": true, 00:07:33.769 "nvme_admin": false, 00:07:33.769 "nvme_io": false, 00:07:33.769 "nvme_io_md": 
false, 00:07:33.769 "write_zeroes": true, 00:07:33.769 "zcopy": true, 00:07:33.769 "get_zone_info": false, 00:07:33.769 "zone_management": false, 00:07:33.769 "zone_append": false, 00:07:33.769 "compare": false, 00:07:33.769 "compare_and_write": false, 00:07:33.769 "abort": true, 00:07:33.769 "seek_hole": false, 00:07:33.769 "seek_data": false, 00:07:33.769 "copy": true, 00:07:33.769 "nvme_iov_md": false 00:07:33.769 }, 00:07:33.769 "memory_domains": [ 00:07:33.769 { 00:07:33.769 "dma_device_id": "system", 00:07:33.769 "dma_device_type": 1 00:07:33.769 }, 00:07:33.769 { 00:07:33.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.769 "dma_device_type": 2 00:07:33.769 } 00:07:33.769 ], 00:07:33.769 "driver_specific": {} 00:07:33.769 } 00:07:33.769 ] 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.769 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.029 "name": "Existed_Raid", 00:07:34.029 "uuid": "1d545086-ff41-480f-9d74-8c88fbddcec4", 00:07:34.029 "strip_size_kb": 0, 00:07:34.029 "state": "online", 00:07:34.029 "raid_level": "raid1", 00:07:34.029 "superblock": false, 00:07:34.029 "num_base_bdevs": 2, 00:07:34.029 "num_base_bdevs_discovered": 2, 00:07:34.029 "num_base_bdevs_operational": 2, 00:07:34.029 "base_bdevs_list": [ 00:07:34.029 { 00:07:34.029 "name": "BaseBdev1", 00:07:34.029 "uuid": "68421d7b-c37c-44cb-ac42-36a9cc07988c", 00:07:34.029 "is_configured": true, 00:07:34.029 "data_offset": 0, 00:07:34.029 "data_size": 65536 00:07:34.029 }, 00:07:34.029 { 00:07:34.029 "name": "BaseBdev2", 00:07:34.029 "uuid": "dcef589b-9cb3-4d67-aa23-d0c4ff28f5d6", 00:07:34.029 "is_configured": true, 00:07:34.029 "data_offset": 0, 00:07:34.029 "data_size": 65536 00:07:34.029 } 00:07:34.029 ] 00:07:34.029 }' 00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:34.029 20:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 [2024-12-08 20:03:06.191348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.288 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.288 "name": "Existed_Raid", 00:07:34.288 "aliases": [ 00:07:34.288 "1d545086-ff41-480f-9d74-8c88fbddcec4" 00:07:34.288 ], 00:07:34.288 "product_name": "Raid Volume", 00:07:34.288 "block_size": 512, 00:07:34.288 "num_blocks": 65536, 00:07:34.288 "uuid": "1d545086-ff41-480f-9d74-8c88fbddcec4", 00:07:34.288 "assigned_rate_limits": { 00:07:34.288 "rw_ios_per_sec": 0, 00:07:34.288 "rw_mbytes_per_sec": 0, 00:07:34.288 "r_mbytes_per_sec": 
0, 00:07:34.288 "w_mbytes_per_sec": 0 00:07:34.288 }, 00:07:34.288 "claimed": false, 00:07:34.288 "zoned": false, 00:07:34.288 "supported_io_types": { 00:07:34.288 "read": true, 00:07:34.288 "write": true, 00:07:34.288 "unmap": false, 00:07:34.288 "flush": false, 00:07:34.288 "reset": true, 00:07:34.288 "nvme_admin": false, 00:07:34.288 "nvme_io": false, 00:07:34.288 "nvme_io_md": false, 00:07:34.288 "write_zeroes": true, 00:07:34.288 "zcopy": false, 00:07:34.288 "get_zone_info": false, 00:07:34.288 "zone_management": false, 00:07:34.288 "zone_append": false, 00:07:34.288 "compare": false, 00:07:34.288 "compare_and_write": false, 00:07:34.288 "abort": false, 00:07:34.288 "seek_hole": false, 00:07:34.288 "seek_data": false, 00:07:34.288 "copy": false, 00:07:34.288 "nvme_iov_md": false 00:07:34.288 }, 00:07:34.288 "memory_domains": [ 00:07:34.288 { 00:07:34.288 "dma_device_id": "system", 00:07:34.288 "dma_device_type": 1 00:07:34.288 }, 00:07:34.288 { 00:07:34.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.288 "dma_device_type": 2 00:07:34.288 }, 00:07:34.288 { 00:07:34.288 "dma_device_id": "system", 00:07:34.288 "dma_device_type": 1 00:07:34.289 }, 00:07:34.289 { 00:07:34.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.289 "dma_device_type": 2 00:07:34.289 } 00:07:34.289 ], 00:07:34.289 "driver_specific": { 00:07:34.289 "raid": { 00:07:34.289 "uuid": "1d545086-ff41-480f-9d74-8c88fbddcec4", 00:07:34.289 "strip_size_kb": 0, 00:07:34.289 "state": "online", 00:07:34.289 "raid_level": "raid1", 00:07:34.289 "superblock": false, 00:07:34.289 "num_base_bdevs": 2, 00:07:34.289 "num_base_bdevs_discovered": 2, 00:07:34.289 "num_base_bdevs_operational": 2, 00:07:34.289 "base_bdevs_list": [ 00:07:34.289 { 00:07:34.289 "name": "BaseBdev1", 00:07:34.289 "uuid": "68421d7b-c37c-44cb-ac42-36a9cc07988c", 00:07:34.289 "is_configured": true, 00:07:34.289 "data_offset": 0, 00:07:34.289 "data_size": 65536 00:07:34.289 }, 00:07:34.289 { 00:07:34.289 "name": "BaseBdev2", 
00:07:34.289 "uuid": "dcef589b-9cb3-4d67-aa23-d0c4ff28f5d6", 00:07:34.289 "is_configured": true, 00:07:34.289 "data_offset": 0, 00:07:34.289 "data_size": 65536 00:07:34.289 } 00:07:34.289 ] 00:07:34.289 } 00:07:34.289 } 00:07:34.289 }' 00:07:34.289 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.549 BaseBdev2' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.549 [2024-12-08 20:03:06.402643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.549 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.809 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.809 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.809 "name": "Existed_Raid", 00:07:34.809 "uuid": "1d545086-ff41-480f-9d74-8c88fbddcec4", 00:07:34.809 "strip_size_kb": 0, 00:07:34.809 "state": "online", 00:07:34.809 "raid_level": "raid1", 00:07:34.809 "superblock": false, 00:07:34.809 "num_base_bdevs": 2, 00:07:34.809 "num_base_bdevs_discovered": 1, 00:07:34.809 "num_base_bdevs_operational": 1, 00:07:34.809 "base_bdevs_list": [ 00:07:34.809 
{ 00:07:34.809 "name": null, 00:07:34.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.809 "is_configured": false, 00:07:34.809 "data_offset": 0, 00:07:34.809 "data_size": 65536 00:07:34.809 }, 00:07:34.809 { 00:07:34.809 "name": "BaseBdev2", 00:07:34.809 "uuid": "dcef589b-9cb3-4d67-aa23-d0c4ff28f5d6", 00:07:34.809 "is_configured": true, 00:07:34.809 "data_offset": 0, 00:07:34.809 "data_size": 65536 00:07:34.809 } 00:07:34.809 ] 00:07:34.809 }' 00:07:34.809 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.809 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.069 20:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.069 [2024-12-08 20:03:06.988982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.069 [2024-12-08 20:03:06.989133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.328 [2024-12-08 20:03:07.091727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.328 [2024-12-08 20:03:07.091799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.328 [2024-12-08 20:03:07.091816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62547 00:07:35.328 20:03:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62547 ']' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62547 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62547 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.328 killing process with pid 62547 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62547' 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62547 00:07:35.328 [2024-12-08 20:03:07.182565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.328 20:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62547 00:07:35.328 [2024-12-08 20:03:07.199636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.707 00:07:36.707 real 0m5.090s 00:07:36.707 user 0m7.228s 00:07:36.707 sys 0m0.821s 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.707 ************************************ 00:07:36.707 END TEST raid_state_function_test 00:07:36.707 ************************************ 00:07:36.707 20:03:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:36.707 20:03:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.707 20:03:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.707 20:03:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.707 ************************************ 00:07:36.707 START TEST raid_state_function_test_sb 00:07:36.707 ************************************ 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62796 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62796' 00:07:36.707 Process raid pid: 62796 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62796 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62796 ']' 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.707 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.707 20:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.708 [2024-12-08 20:03:08.598748] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:36.708 [2024-12-08 20:03:08.598852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.967 [2024-12-08 20:03:08.772824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.967 [2024-12-08 20:03:08.912984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.233 [2024-12-08 20:03:09.155922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.234 [2024-12-08 20:03:09.156000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.520 [2024-12-08 20:03:09.428268] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.520 [2024-12-08 20:03:09.428357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.520 [2024-12-08 20:03:09.428370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.520 [2024-12-08 20:03:09.428382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.520 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.520 "name": "Existed_Raid", 00:07:37.520 "uuid": "cf8f2dd6-adca-4765-bc0d-299e0e75c4ef", 00:07:37.520 "strip_size_kb": 0, 00:07:37.520 "state": "configuring", 00:07:37.520 "raid_level": "raid1", 00:07:37.520 "superblock": true, 00:07:37.520 "num_base_bdevs": 2, 00:07:37.520 "num_base_bdevs_discovered": 0, 00:07:37.521 "num_base_bdevs_operational": 2, 00:07:37.521 "base_bdevs_list": [ 00:07:37.521 { 00:07:37.521 "name": "BaseBdev1", 00:07:37.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.521 "is_configured": false, 00:07:37.521 "data_offset": 0, 00:07:37.521 "data_size": 0 00:07:37.521 }, 00:07:37.521 { 00:07:37.521 "name": "BaseBdev2", 00:07:37.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.521 "is_configured": false, 00:07:37.521 "data_offset": 0, 00:07:37.521 "data_size": 0 00:07:37.521 } 00:07:37.521 ] 00:07:37.521 }' 00:07:37.521 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.521 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 [2024-12-08 20:03:09.883436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:38.105 [2024-12-08 20:03:09.883503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 [2024-12-08 20:03:09.895348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.105 [2024-12-08 20:03:09.895397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.105 [2024-12-08 20:03:09.895409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.105 [2024-12-08 20:03:09.895424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 [2024-12-08 20:03:09.952593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.105 BaseBdev1 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.105 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.105 [ 00:07:38.105 { 00:07:38.105 "name": "BaseBdev1", 00:07:38.105 "aliases": [ 00:07:38.105 "163fb885-b787-4436-afff-0e60171cafdf" 00:07:38.105 ], 00:07:38.105 "product_name": "Malloc disk", 00:07:38.105 "block_size": 512, 00:07:38.105 "num_blocks": 65536, 00:07:38.105 "uuid": "163fb885-b787-4436-afff-0e60171cafdf", 00:07:38.105 "assigned_rate_limits": { 00:07:38.105 "rw_ios_per_sec": 0, 00:07:38.105 "rw_mbytes_per_sec": 0, 00:07:38.105 "r_mbytes_per_sec": 0, 00:07:38.105 "w_mbytes_per_sec": 0 00:07:38.105 }, 00:07:38.106 "claimed": true, 
00:07:38.106 "claim_type": "exclusive_write", 00:07:38.106 "zoned": false, 00:07:38.106 "supported_io_types": { 00:07:38.106 "read": true, 00:07:38.106 "write": true, 00:07:38.106 "unmap": true, 00:07:38.106 "flush": true, 00:07:38.106 "reset": true, 00:07:38.106 "nvme_admin": false, 00:07:38.106 "nvme_io": false, 00:07:38.106 "nvme_io_md": false, 00:07:38.106 "write_zeroes": true, 00:07:38.106 "zcopy": true, 00:07:38.106 "get_zone_info": false, 00:07:38.106 "zone_management": false, 00:07:38.106 "zone_append": false, 00:07:38.106 "compare": false, 00:07:38.106 "compare_and_write": false, 00:07:38.106 "abort": true, 00:07:38.106 "seek_hole": false, 00:07:38.106 "seek_data": false, 00:07:38.106 "copy": true, 00:07:38.106 "nvme_iov_md": false 00:07:38.106 }, 00:07:38.106 "memory_domains": [ 00:07:38.106 { 00:07:38.106 "dma_device_id": "system", 00:07:38.106 "dma_device_type": 1 00:07:38.106 }, 00:07:38.106 { 00:07:38.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.106 "dma_device_type": 2 00:07:38.106 } 00:07:38.106 ], 00:07:38.106 "driver_specific": {} 00:07:38.106 } 00:07:38.106 ] 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.106 20:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.106 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.106 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.106 "name": "Existed_Raid", 00:07:38.106 "uuid": "8af179e4-ece2-4db5-8285-f82f5d7b73fd", 00:07:38.106 "strip_size_kb": 0, 00:07:38.106 "state": "configuring", 00:07:38.106 "raid_level": "raid1", 00:07:38.106 "superblock": true, 00:07:38.106 "num_base_bdevs": 2, 00:07:38.106 "num_base_bdevs_discovered": 1, 00:07:38.106 "num_base_bdevs_operational": 2, 00:07:38.106 "base_bdevs_list": [ 00:07:38.106 { 00:07:38.106 "name": "BaseBdev1", 00:07:38.106 "uuid": "163fb885-b787-4436-afff-0e60171cafdf", 00:07:38.106 "is_configured": true, 00:07:38.106 "data_offset": 2048, 00:07:38.106 "data_size": 63488 00:07:38.106 }, 00:07:38.106 { 00:07:38.106 "name": "BaseBdev2", 00:07:38.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.106 "is_configured": false, 00:07:38.106 
"data_offset": 0, 00:07:38.106 "data_size": 0 00:07:38.106 } 00:07:38.106 ] 00:07:38.106 }' 00:07:38.106 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.106 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 [2024-12-08 20:03:10.391995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.675 [2024-12-08 20:03:10.392209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 [2024-12-08 20:03:10.403996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.675 [2024-12-08 20:03:10.406284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.675 [2024-12-08 20:03:10.406379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.675 "name": "Existed_Raid", 00:07:38.675 "uuid": "c3288ad1-979f-4972-b73d-75ab36422254", 00:07:38.675 "strip_size_kb": 0, 00:07:38.675 "state": "configuring", 00:07:38.675 "raid_level": "raid1", 00:07:38.675 "superblock": true, 00:07:38.675 "num_base_bdevs": 2, 00:07:38.675 "num_base_bdevs_discovered": 1, 00:07:38.675 "num_base_bdevs_operational": 2, 00:07:38.675 "base_bdevs_list": [ 00:07:38.675 { 00:07:38.675 "name": "BaseBdev1", 00:07:38.675 "uuid": "163fb885-b787-4436-afff-0e60171cafdf", 00:07:38.675 "is_configured": true, 00:07:38.675 "data_offset": 2048, 00:07:38.675 "data_size": 63488 00:07:38.675 }, 00:07:38.675 { 00:07:38.675 "name": "BaseBdev2", 00:07:38.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.675 "is_configured": false, 00:07:38.675 "data_offset": 0, 00:07:38.675 "data_size": 0 00:07:38.675 } 00:07:38.675 ] 00:07:38.675 }' 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.675 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.935 [2024-12-08 20:03:10.870516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.935 [2024-12-08 20:03:10.871034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.935 [2024-12-08 20:03:10.871097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:38.935 [2024-12-08 20:03:10.871518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.935 
BaseBdev2 00:07:38.935 [2024-12-08 20:03:10.871803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:38.935 [2024-12-08 20:03:10.871874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:38.935 [2024-12-08 20:03:10.872126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.935 20:03:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.935 [ 00:07:38.935 { 00:07:38.935 "name": "BaseBdev2", 00:07:38.935 "aliases": [ 00:07:38.935 "4cdc8237-9a2a-4650-b7de-952c1e63ba29" 00:07:38.935 ], 00:07:38.935 "product_name": "Malloc disk", 00:07:38.935 "block_size": 512, 00:07:38.935 "num_blocks": 65536, 00:07:38.935 "uuid": "4cdc8237-9a2a-4650-b7de-952c1e63ba29", 00:07:38.935 "assigned_rate_limits": { 00:07:38.935 "rw_ios_per_sec": 0, 00:07:38.935 "rw_mbytes_per_sec": 0, 00:07:38.935 "r_mbytes_per_sec": 0, 00:07:38.935 "w_mbytes_per_sec": 0 00:07:38.935 }, 00:07:38.935 "claimed": true, 00:07:38.935 "claim_type": "exclusive_write", 00:07:38.935 "zoned": false, 00:07:38.935 "supported_io_types": { 00:07:38.935 "read": true, 00:07:38.935 "write": true, 00:07:38.935 "unmap": true, 00:07:38.935 "flush": true, 00:07:38.935 "reset": true, 00:07:38.935 "nvme_admin": false, 00:07:38.935 "nvme_io": false, 00:07:38.935 "nvme_io_md": false, 00:07:38.935 "write_zeroes": true, 00:07:38.935 "zcopy": true, 00:07:38.935 "get_zone_info": false, 00:07:38.935 "zone_management": false, 00:07:38.935 "zone_append": false, 00:07:38.935 "compare": false, 00:07:38.936 "compare_and_write": false, 00:07:38.936 "abort": true, 00:07:38.936 "seek_hole": false, 00:07:38.936 "seek_data": false, 00:07:38.936 "copy": true, 00:07:38.936 "nvme_iov_md": false 00:07:38.936 }, 00:07:38.936 "memory_domains": [ 00:07:38.936 { 00:07:38.936 "dma_device_id": "system", 00:07:38.936 "dma_device_type": 1 00:07:38.936 }, 00:07:38.936 { 00:07:38.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.936 "dma_device_type": 2 00:07:38.936 } 00:07:38.936 ], 00:07:38.936 "driver_specific": {} 00:07:38.936 } 00:07:38.936 ] 00:07:38.936 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:39.195 "name": "Existed_Raid", 00:07:39.195 "uuid": "c3288ad1-979f-4972-b73d-75ab36422254", 00:07:39.195 "strip_size_kb": 0, 00:07:39.195 "state": "online", 00:07:39.195 "raid_level": "raid1", 00:07:39.195 "superblock": true, 00:07:39.195 "num_base_bdevs": 2, 00:07:39.195 "num_base_bdevs_discovered": 2, 00:07:39.195 "num_base_bdevs_operational": 2, 00:07:39.195 "base_bdevs_list": [ 00:07:39.195 { 00:07:39.195 "name": "BaseBdev1", 00:07:39.195 "uuid": "163fb885-b787-4436-afff-0e60171cafdf", 00:07:39.195 "is_configured": true, 00:07:39.195 "data_offset": 2048, 00:07:39.195 "data_size": 63488 00:07:39.195 }, 00:07:39.195 { 00:07:39.195 "name": "BaseBdev2", 00:07:39.195 "uuid": "4cdc8237-9a2a-4650-b7de-952c1e63ba29", 00:07:39.195 "is_configured": true, 00:07:39.195 "data_offset": 2048, 00:07:39.195 "data_size": 63488 00:07:39.195 } 00:07:39.195 ] 00:07:39.195 }' 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.195 20:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.455 20:03:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.455 [2024-12-08 20:03:11.322185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.455 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.455 "name": "Existed_Raid", 00:07:39.455 "aliases": [ 00:07:39.455 "c3288ad1-979f-4972-b73d-75ab36422254" 00:07:39.455 ], 00:07:39.455 "product_name": "Raid Volume", 00:07:39.455 "block_size": 512, 00:07:39.455 "num_blocks": 63488, 00:07:39.455 "uuid": "c3288ad1-979f-4972-b73d-75ab36422254", 00:07:39.455 "assigned_rate_limits": { 00:07:39.455 "rw_ios_per_sec": 0, 00:07:39.455 "rw_mbytes_per_sec": 0, 00:07:39.455 "r_mbytes_per_sec": 0, 00:07:39.455 "w_mbytes_per_sec": 0 00:07:39.455 }, 00:07:39.455 "claimed": false, 00:07:39.455 "zoned": false, 00:07:39.455 "supported_io_types": { 00:07:39.455 "read": true, 00:07:39.455 "write": true, 00:07:39.455 "unmap": false, 00:07:39.455 "flush": false, 00:07:39.455 "reset": true, 00:07:39.455 "nvme_admin": false, 00:07:39.455 "nvme_io": false, 00:07:39.455 "nvme_io_md": false, 00:07:39.455 "write_zeroes": true, 00:07:39.455 "zcopy": false, 00:07:39.455 "get_zone_info": false, 00:07:39.455 "zone_management": false, 00:07:39.455 "zone_append": false, 00:07:39.455 "compare": false, 00:07:39.455 "compare_and_write": false, 00:07:39.455 "abort": false, 00:07:39.455 "seek_hole": false, 00:07:39.456 "seek_data": false, 00:07:39.456 "copy": false, 00:07:39.456 "nvme_iov_md": false 00:07:39.456 }, 00:07:39.456 "memory_domains": [ 00:07:39.456 { 00:07:39.456 "dma_device_id": "system", 00:07:39.456 
"dma_device_type": 1 00:07:39.456 }, 00:07:39.456 { 00:07:39.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.456 "dma_device_type": 2 00:07:39.456 }, 00:07:39.456 { 00:07:39.456 "dma_device_id": "system", 00:07:39.456 "dma_device_type": 1 00:07:39.456 }, 00:07:39.456 { 00:07:39.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.456 "dma_device_type": 2 00:07:39.456 } 00:07:39.456 ], 00:07:39.456 "driver_specific": { 00:07:39.456 "raid": { 00:07:39.456 "uuid": "c3288ad1-979f-4972-b73d-75ab36422254", 00:07:39.456 "strip_size_kb": 0, 00:07:39.456 "state": "online", 00:07:39.456 "raid_level": "raid1", 00:07:39.456 "superblock": true, 00:07:39.456 "num_base_bdevs": 2, 00:07:39.456 "num_base_bdevs_discovered": 2, 00:07:39.456 "num_base_bdevs_operational": 2, 00:07:39.456 "base_bdevs_list": [ 00:07:39.456 { 00:07:39.456 "name": "BaseBdev1", 00:07:39.456 "uuid": "163fb885-b787-4436-afff-0e60171cafdf", 00:07:39.456 "is_configured": true, 00:07:39.456 "data_offset": 2048, 00:07:39.456 "data_size": 63488 00:07:39.456 }, 00:07:39.456 { 00:07:39.456 "name": "BaseBdev2", 00:07:39.456 "uuid": "4cdc8237-9a2a-4650-b7de-952c1e63ba29", 00:07:39.456 "is_configured": true, 00:07:39.456 "data_offset": 2048, 00:07:39.456 "data_size": 63488 00:07:39.456 } 00:07:39.456 ] 00:07:39.456 } 00:07:39.456 } 00:07:39.456 }' 00:07:39.456 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.456 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.456 BaseBdev2' 00:07:39.456 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.716 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.717 20:03:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.717 [2024-12-08 20:03:11.561539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.717 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.977 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.977 "name": "Existed_Raid", 00:07:39.977 "uuid": "c3288ad1-979f-4972-b73d-75ab36422254", 00:07:39.977 "strip_size_kb": 0, 00:07:39.977 "state": "online", 00:07:39.977 "raid_level": "raid1", 00:07:39.977 "superblock": true, 00:07:39.977 "num_base_bdevs": 2, 00:07:39.977 "num_base_bdevs_discovered": 1, 00:07:39.977 "num_base_bdevs_operational": 1, 00:07:39.977 "base_bdevs_list": [ 00:07:39.977 { 00:07:39.977 "name": null, 00:07:39.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.977 "is_configured": false, 00:07:39.977 "data_offset": 0, 00:07:39.977 "data_size": 63488 00:07:39.977 }, 00:07:39.977 { 00:07:39.977 "name": "BaseBdev2", 00:07:39.977 "uuid": "4cdc8237-9a2a-4650-b7de-952c1e63ba29", 00:07:39.977 "is_configured": true, 00:07:39.977 "data_offset": 2048, 00:07:39.977 "data_size": 63488 00:07:39.977 } 00:07:39.977 ] 00:07:39.977 }' 00:07:39.977 20:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.977 20:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.236 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:40.236 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.237 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.237 [2024-12-08 20:03:12.188978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.237 [2024-12-08 20:03:12.189134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.496 [2024-12-08 20:03:12.293873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.496 [2024-12-08 20:03:12.294050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.496 [2024-12-08 20:03:12.294117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62796 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62796 ']' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62796 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62796 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.496 20:03:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.496 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62796' 00:07:40.497 killing process with pid 62796 00:07:40.497 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62796 00:07:40.497 [2024-12-08 20:03:12.378221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.497 20:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62796 00:07:40.497 [2024-12-08 20:03:12.396220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.879 20:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:41.879 00:07:41.879 real 0m5.130s 00:07:41.879 user 0m7.176s 00:07:41.879 sys 0m0.921s 00:07:41.879 ************************************ 00:07:41.879 END TEST raid_state_function_test_sb 00:07:41.879 20:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.879 20:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.879 ************************************ 00:07:41.879 20:03:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:41.879 20:03:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:41.879 20:03:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.879 20:03:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.879 ************************************ 00:07:41.879 START TEST raid_superblock_test 00:07:41.879 ************************************ 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:41.879 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63048 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63048 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63048 ']' 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.880 20:03:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.880 [2024-12-08 20:03:13.789935] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:41.880 [2024-12-08 20:03:13.790539] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63048 ] 00:07:42.140 [2024-12-08 20:03:13.962974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.140 [2024-12-08 20:03:14.101988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.400 [2024-12-08 20:03:14.332109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.400 [2024-12-08 20:03:14.332315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.660 20:03:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.660 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 malloc1 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 [2024-12-08 20:03:14.668054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:42.921 [2024-12-08 20:03:14.668212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.921 [2024-12-08 20:03:14.668258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:42.921 [2024-12-08 20:03:14.668287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.921 
[2024-12-08 20:03:14.670749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.921 [2024-12-08 20:03:14.670834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:42.921 pt1 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 malloc2 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.921 20:03:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 [2024-12-08 20:03:14.729891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.921 [2024-12-08 20:03:14.730042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.921 [2024-12-08 20:03:14.730095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:42.921 [2024-12-08 20:03:14.730135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.921 [2024-12-08 20:03:14.732564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.921 [2024-12-08 20:03:14.732646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.921 pt2 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 [2024-12-08 20:03:14.741935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:42.921 [2024-12-08 20:03:14.744067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.921 [2024-12-08 20:03:14.744301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:42.921 [2024-12-08 20:03:14.744371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:42.921 [2024-12-08 
20:03:14.744683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.921 [2024-12-08 20:03:14.744918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:42.921 [2024-12-08 20:03:14.744992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:42.921 [2024-12-08 20:03:14.745288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.921 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.922 20:03:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.922 "name": "raid_bdev1", 00:07:42.922 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:42.922 "strip_size_kb": 0, 00:07:42.922 "state": "online", 00:07:42.922 "raid_level": "raid1", 00:07:42.922 "superblock": true, 00:07:42.922 "num_base_bdevs": 2, 00:07:42.922 "num_base_bdevs_discovered": 2, 00:07:42.922 "num_base_bdevs_operational": 2, 00:07:42.922 "base_bdevs_list": [ 00:07:42.922 { 00:07:42.922 "name": "pt1", 00:07:42.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.922 "is_configured": true, 00:07:42.922 "data_offset": 2048, 00:07:42.922 "data_size": 63488 00:07:42.922 }, 00:07:42.922 { 00:07:42.922 "name": "pt2", 00:07:42.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.922 "is_configured": true, 00:07:42.922 "data_offset": 2048, 00:07:42.922 "data_size": 63488 00:07:42.922 } 00:07:42.922 ] 00:07:42.922 }' 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.922 20:03:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.492 
20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 [2024-12-08 20:03:15.189504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.492 "name": "raid_bdev1", 00:07:43.492 "aliases": [ 00:07:43.492 "7e38feaa-852c-4716-9f31-51d3fc7c8179" 00:07:43.492 ], 00:07:43.492 "product_name": "Raid Volume", 00:07:43.492 "block_size": 512, 00:07:43.492 "num_blocks": 63488, 00:07:43.492 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:43.492 "assigned_rate_limits": { 00:07:43.492 "rw_ios_per_sec": 0, 00:07:43.492 "rw_mbytes_per_sec": 0, 00:07:43.492 "r_mbytes_per_sec": 0, 00:07:43.492 "w_mbytes_per_sec": 0 00:07:43.492 }, 00:07:43.492 "claimed": false, 00:07:43.492 "zoned": false, 00:07:43.492 "supported_io_types": { 00:07:43.492 "read": true, 00:07:43.492 "write": true, 00:07:43.492 "unmap": false, 00:07:43.492 "flush": false, 00:07:43.492 "reset": true, 00:07:43.492 "nvme_admin": false, 00:07:43.492 "nvme_io": false, 00:07:43.492 "nvme_io_md": false, 00:07:43.492 "write_zeroes": true, 00:07:43.492 "zcopy": false, 00:07:43.492 "get_zone_info": false, 00:07:43.492 "zone_management": false, 00:07:43.492 "zone_append": false, 00:07:43.492 "compare": false, 00:07:43.492 "compare_and_write": false, 00:07:43.492 "abort": false, 00:07:43.492 "seek_hole": false, 
00:07:43.492 "seek_data": false, 00:07:43.492 "copy": false, 00:07:43.492 "nvme_iov_md": false 00:07:43.492 }, 00:07:43.492 "memory_domains": [ 00:07:43.492 { 00:07:43.492 "dma_device_id": "system", 00:07:43.492 "dma_device_type": 1 00:07:43.492 }, 00:07:43.492 { 00:07:43.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.492 "dma_device_type": 2 00:07:43.492 }, 00:07:43.492 { 00:07:43.492 "dma_device_id": "system", 00:07:43.492 "dma_device_type": 1 00:07:43.492 }, 00:07:43.492 { 00:07:43.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.492 "dma_device_type": 2 00:07:43.492 } 00:07:43.492 ], 00:07:43.492 "driver_specific": { 00:07:43.492 "raid": { 00:07:43.492 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:43.492 "strip_size_kb": 0, 00:07:43.492 "state": "online", 00:07:43.492 "raid_level": "raid1", 00:07:43.492 "superblock": true, 00:07:43.492 "num_base_bdevs": 2, 00:07:43.492 "num_base_bdevs_discovered": 2, 00:07:43.492 "num_base_bdevs_operational": 2, 00:07:43.492 "base_bdevs_list": [ 00:07:43.492 { 00:07:43.492 "name": "pt1", 00:07:43.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.492 "is_configured": true, 00:07:43.492 "data_offset": 2048, 00:07:43.492 "data_size": 63488 00:07:43.492 }, 00:07:43.492 { 00:07:43.492 "name": "pt2", 00:07:43.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.492 "is_configured": true, 00:07:43.492 "data_offset": 2048, 00:07:43.492 "data_size": 63488 00:07:43.492 } 00:07:43.492 ] 00:07:43.492 } 00:07:43.492 } 00:07:43.492 }' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.492 pt2' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.492 20:03:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 [2024-12-08 20:03:15.421086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7e38feaa-852c-4716-9f31-51d3fc7c8179 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7e38feaa-852c-4716-9f31-51d3fc7c8179 ']' 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.492 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 [2024-12-08 20:03:15.468684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.752 [2024-12-08 20:03:15.468815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.752 [2024-12-08 20:03:15.468987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.752 [2024-12-08 20:03:15.469118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.752 [2024-12-08 20:03:15.469178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] 
| select(.product_name == "passthru")] | any' 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 [2024-12-08 20:03:15.608515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:43.752 [2024-12-08 20:03:15.610983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:43.752 [2024-12-08 20:03:15.611081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:43.752 [2024-12-08 20:03:15.611158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:43.752 [2024-12-08 20:03:15.611188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.752 [2024-12-08 20:03:15.611203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:43.752 request: 00:07:43.752 { 00:07:43.752 "name": "raid_bdev1", 00:07:43.752 "raid_level": "raid1", 00:07:43.752 "base_bdevs": [ 00:07:43.752 "malloc1", 00:07:43.752 "malloc2" 00:07:43.752 ], 00:07:43.752 "superblock": false, 00:07:43.752 "method": "bdev_raid_create", 00:07:43.752 "req_id": 1 00:07:43.752 } 00:07:43.752 Got JSON-RPC error response 00:07:43.752 response: 00:07:43.752 { 00:07:43.752 "code": -17, 00:07:43.752 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:43.752 } 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:43.752 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.753 [2024-12-08 20:03:15.672380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:43.753 [2024-12-08 20:03:15.672589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.753 [2024-12-08 20:03:15.672639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:43.753 [2024-12-08 20:03:15.672682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.753 [2024-12-08 20:03:15.675475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.753 [2024-12-08 20:03:15.675569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:43.753 [2024-12-08 20:03:15.675747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:43.753 [2024-12-08 20:03:15.675867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:43.753 pt1 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.753 20:03:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.753 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.013 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.013 "name": "raid_bdev1", 00:07:44.013 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:44.013 "strip_size_kb": 0, 00:07:44.013 "state": "configuring", 00:07:44.013 "raid_level": "raid1", 00:07:44.013 "superblock": true, 00:07:44.013 "num_base_bdevs": 2, 00:07:44.013 "num_base_bdevs_discovered": 1, 00:07:44.013 "num_base_bdevs_operational": 2, 00:07:44.013 "base_bdevs_list": [ 00:07:44.013 { 00:07:44.013 "name": "pt1", 00:07:44.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.013 
"is_configured": true, 00:07:44.013 "data_offset": 2048, 00:07:44.013 "data_size": 63488 00:07:44.013 }, 00:07:44.013 { 00:07:44.013 "name": null, 00:07:44.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.013 "is_configured": false, 00:07:44.013 "data_offset": 2048, 00:07:44.013 "data_size": 63488 00:07:44.013 } 00:07:44.013 ] 00:07:44.013 }' 00:07:44.013 20:03:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.014 20:03:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.274 [2024-12-08 20:03:16.119584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.274 [2024-12-08 20:03:16.119699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.274 [2024-12-08 20:03:16.119728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:44.274 [2024-12-08 20:03:16.119743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.274 [2024-12-08 20:03:16.120422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.274 [2024-12-08 20:03:16.120467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.274 [2024-12-08 20:03:16.120581] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:44.274 [2024-12-08 20:03:16.120622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.274 [2024-12-08 20:03:16.120772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.274 [2024-12-08 20:03:16.120786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.274 [2024-12-08 20:03:16.121099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:44.274 [2024-12-08 20:03:16.121286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.274 [2024-12-08 20:03:16.121296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.274 [2024-12-08 20:03:16.121454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.274 pt2 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.274 
20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.274 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.275 "name": "raid_bdev1", 00:07:44.275 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:44.275 "strip_size_kb": 0, 00:07:44.275 "state": "online", 00:07:44.275 "raid_level": "raid1", 00:07:44.275 "superblock": true, 00:07:44.275 "num_base_bdevs": 2, 00:07:44.275 "num_base_bdevs_discovered": 2, 00:07:44.275 "num_base_bdevs_operational": 2, 00:07:44.275 "base_bdevs_list": [ 00:07:44.275 { 00:07:44.275 "name": "pt1", 00:07:44.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.275 "is_configured": true, 00:07:44.275 "data_offset": 2048, 00:07:44.275 "data_size": 63488 00:07:44.275 }, 00:07:44.275 { 00:07:44.275 "name": "pt2", 00:07:44.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.275 "is_configured": true, 00:07:44.275 "data_offset": 2048, 00:07:44.275 "data_size": 63488 00:07:44.275 } 00:07:44.275 ] 00:07:44.275 }' 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:44.275 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.845 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.846 [2024-12-08 20:03:16.583225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.846 "name": "raid_bdev1", 00:07:44.846 "aliases": [ 00:07:44.846 "7e38feaa-852c-4716-9f31-51d3fc7c8179" 00:07:44.846 ], 00:07:44.846 "product_name": "Raid Volume", 00:07:44.846 "block_size": 512, 00:07:44.846 "num_blocks": 63488, 00:07:44.846 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:44.846 "assigned_rate_limits": { 00:07:44.846 "rw_ios_per_sec": 0, 00:07:44.846 "rw_mbytes_per_sec": 0, 00:07:44.846 "r_mbytes_per_sec": 0, 00:07:44.846 "w_mbytes_per_sec": 0 
00:07:44.846 }, 00:07:44.846 "claimed": false, 00:07:44.846 "zoned": false, 00:07:44.846 "supported_io_types": { 00:07:44.846 "read": true, 00:07:44.846 "write": true, 00:07:44.846 "unmap": false, 00:07:44.846 "flush": false, 00:07:44.846 "reset": true, 00:07:44.846 "nvme_admin": false, 00:07:44.846 "nvme_io": false, 00:07:44.846 "nvme_io_md": false, 00:07:44.846 "write_zeroes": true, 00:07:44.846 "zcopy": false, 00:07:44.846 "get_zone_info": false, 00:07:44.846 "zone_management": false, 00:07:44.846 "zone_append": false, 00:07:44.846 "compare": false, 00:07:44.846 "compare_and_write": false, 00:07:44.846 "abort": false, 00:07:44.846 "seek_hole": false, 00:07:44.846 "seek_data": false, 00:07:44.846 "copy": false, 00:07:44.846 "nvme_iov_md": false 00:07:44.846 }, 00:07:44.846 "memory_domains": [ 00:07:44.846 { 00:07:44.846 "dma_device_id": "system", 00:07:44.846 "dma_device_type": 1 00:07:44.846 }, 00:07:44.846 { 00:07:44.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.846 "dma_device_type": 2 00:07:44.846 }, 00:07:44.846 { 00:07:44.846 "dma_device_id": "system", 00:07:44.846 "dma_device_type": 1 00:07:44.846 }, 00:07:44.846 { 00:07:44.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.846 "dma_device_type": 2 00:07:44.846 } 00:07:44.846 ], 00:07:44.846 "driver_specific": { 00:07:44.846 "raid": { 00:07:44.846 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:44.846 "strip_size_kb": 0, 00:07:44.846 "state": "online", 00:07:44.846 "raid_level": "raid1", 00:07:44.846 "superblock": true, 00:07:44.846 "num_base_bdevs": 2, 00:07:44.846 "num_base_bdevs_discovered": 2, 00:07:44.846 "num_base_bdevs_operational": 2, 00:07:44.846 "base_bdevs_list": [ 00:07:44.846 { 00:07:44.846 "name": "pt1", 00:07:44.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.846 "is_configured": true, 00:07:44.846 "data_offset": 2048, 00:07:44.846 "data_size": 63488 00:07:44.846 }, 00:07:44.846 { 00:07:44.846 "name": "pt2", 00:07:44.846 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:44.846 "is_configured": true, 00:07:44.846 "data_offset": 2048, 00:07:44.846 "data_size": 63488 00:07:44.846 } 00:07:44.846 ] 00:07:44.846 } 00:07:44.846 } 00:07:44.846 }' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.846 pt2' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.846 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.846 [2024-12-08 20:03:16.814742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7e38feaa-852c-4716-9f31-51d3fc7c8179 '!=' 7e38feaa-852c-4716-9f31-51d3fc7c8179 ']' 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.107 [2024-12-08 20:03:16.862489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:45.107 "name": "raid_bdev1", 00:07:45.107 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:45.107 "strip_size_kb": 0, 00:07:45.107 "state": "online", 00:07:45.107 "raid_level": "raid1", 00:07:45.107 "superblock": true, 00:07:45.107 "num_base_bdevs": 2, 00:07:45.107 "num_base_bdevs_discovered": 1, 00:07:45.107 "num_base_bdevs_operational": 1, 00:07:45.107 "base_bdevs_list": [ 00:07:45.107 { 00:07:45.107 "name": null, 00:07:45.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.107 "is_configured": false, 00:07:45.107 "data_offset": 0, 00:07:45.107 "data_size": 63488 00:07:45.107 }, 00:07:45.107 { 00:07:45.107 "name": "pt2", 00:07:45.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.107 "is_configured": true, 00:07:45.107 "data_offset": 2048, 00:07:45.107 "data_size": 63488 00:07:45.107 } 00:07:45.107 ] 00:07:45.107 }' 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.107 20:03:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.367 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.367 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.367 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.367 [2024-12-08 20:03:17.269791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.367 [2024-12-08 20:03:17.269842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.368 [2024-12-08 20:03:17.269963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.368 [2024-12-08 20:03:17.270025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.368 [2024-12-08 20:03:17.270040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.368 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.368 [2024-12-08 20:03:17.341578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.368 [2024-12-08 20:03:17.341648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.368 [2024-12-08 20:03:17.341668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:45.368 [2024-12-08 20:03:17.341683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.627 [2024-12-08 20:03:17.344320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.627 [2024-12-08 20:03:17.344367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.627 [2024-12-08 20:03:17.344463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:45.627 [2024-12-08 20:03:17.344516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.627 [2024-12-08 20:03:17.344642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:45.627 [2024-12-08 20:03:17.344657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:45.627 [2024-12-08 20:03:17.344971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:45.627 [2024-12-08 20:03:17.345157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:45.627 [2024-12-08 20:03:17.345170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:45.627 [2024-12-08 20:03:17.345383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.627 pt2 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:45.627 "name": "raid_bdev1", 00:07:45.627 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:45.627 "strip_size_kb": 0, 00:07:45.627 "state": "online", 00:07:45.627 "raid_level": "raid1", 00:07:45.627 "superblock": true, 00:07:45.627 "num_base_bdevs": 2, 00:07:45.627 "num_base_bdevs_discovered": 1, 00:07:45.627 "num_base_bdevs_operational": 1, 00:07:45.627 "base_bdevs_list": [ 00:07:45.627 { 00:07:45.627 "name": null, 00:07:45.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.627 "is_configured": false, 00:07:45.627 "data_offset": 2048, 00:07:45.627 "data_size": 63488 00:07:45.627 }, 00:07:45.627 { 00:07:45.627 "name": "pt2", 00:07:45.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.627 "is_configured": true, 00:07:45.627 "data_offset": 2048, 00:07:45.627 "data_size": 63488 00:07:45.627 } 00:07:45.627 ] 00:07:45.627 }' 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.627 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.887 [2024-12-08 20:03:17.812796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.887 [2024-12-08 20:03:17.812925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.887 [2024-12-08 20:03:17.813073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.887 [2024-12-08 20:03:17.813184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.887 [2024-12-08 20:03:17.813253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:45.887 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.146 [2024-12-08 20:03:17.872691] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.146 [2024-12-08 20:03:17.872765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.146 [2024-12-08 20:03:17.872788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:46.146 [2024-12-08 20:03:17.872799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.146 [2024-12-08 20:03:17.875397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.146 [2024-12-08 20:03:17.875493] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.146 [2024-12-08 20:03:17.875605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:46.146 [2024-12-08 20:03:17.875657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.146 [2024-12-08 20:03:17.875834] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:46.146 [2024-12-08 20:03:17.875850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.146 [2024-12-08 20:03:17.875868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:46.146 [2024-12-08 20:03:17.875921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.146 [2024-12-08 20:03:17.876026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:46.146 [2024-12-08 20:03:17.876037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:46.146 [2024-12-08 20:03:17.876329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:46.146 [2024-12-08 20:03:17.876506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:46.146 [2024-12-08 20:03:17.876521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:46.146 [2024-12-08 20:03:17.876716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.146 pt1 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.146 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.147 "name": "raid_bdev1", 00:07:46.147 "uuid": "7e38feaa-852c-4716-9f31-51d3fc7c8179", 00:07:46.147 "strip_size_kb": 0, 00:07:46.147 "state": "online", 00:07:46.147 "raid_level": "raid1", 00:07:46.147 "superblock": true, 00:07:46.147 "num_base_bdevs": 2, 00:07:46.147 "num_base_bdevs_discovered": 1, 00:07:46.147 "num_base_bdevs_operational": 
1, 00:07:46.147 "base_bdevs_list": [ 00:07:46.147 { 00:07:46.147 "name": null, 00:07:46.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.147 "is_configured": false, 00:07:46.147 "data_offset": 2048, 00:07:46.147 "data_size": 63488 00:07:46.147 }, 00:07:46.147 { 00:07:46.147 "name": "pt2", 00:07:46.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.147 "is_configured": true, 00:07:46.147 "data_offset": 2048, 00:07:46.147 "data_size": 63488 00:07:46.147 } 00:07:46.147 ] 00:07:46.147 }' 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.147 20:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.406 [2024-12-08 20:03:18.312289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7e38feaa-852c-4716-9f31-51d3fc7c8179 '!=' 7e38feaa-852c-4716-9f31-51d3fc7c8179 ']' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63048 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63048 ']' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63048 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63048 00:07:46.406 killing process with pid 63048 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63048' 00:07:46.406 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63048 00:07:46.407 20:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63048 00:07:46.407 [2024-12-08 20:03:18.365528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.407 [2024-12-08 20:03:18.365648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.407 [2024-12-08 20:03:18.365739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.407 [2024-12-08 20:03:18.365759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:07:46.666 [2024-12-08 20:03:18.582710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.052 20:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:48.052 ************************************ 00:07:48.052 END TEST raid_superblock_test 00:07:48.052 ************************************ 00:07:48.052 00:07:48.052 real 0m6.106s 00:07:48.052 user 0m9.042s 00:07:48.052 sys 0m1.128s 00:07:48.052 20:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.052 20:03:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.052 20:03:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:48.052 20:03:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.052 20:03:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.052 20:03:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.052 ************************************ 00:07:48.052 START TEST raid_read_error_test 00:07:48.052 ************************************ 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X6Ffd7s93v 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63378 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63378 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:48.052 
20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63378 ']' 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.052 20:03:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.052 [2024-12-08 20:03:19.991422] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:48.052 [2024-12-08 20:03:19.991683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:07:48.311 [2024-12-08 20:03:20.167143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.571 [2024-12-08 20:03:20.314068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.829 [2024-12-08 20:03:20.559924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.829 [2024-12-08 20:03:20.560102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.829 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 BaseBdev1_malloc 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 true 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 [2024-12-08 20:03:20.862705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.089 [2024-12-08 20:03:20.862880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.089 [2024-12-08 20:03:20.862913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:49.089 [2024-12-08 20:03:20.862957] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.089 [2024-12-08 20:03:20.865780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.089 [2024-12-08 20:03:20.865836] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.089 BaseBdev1 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 BaseBdev2_malloc 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 true 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 [2024-12-08 20:03:20.939250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:49.089 [2024-12-08 20:03:20.939336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.089 [2024-12-08 20:03:20.939358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:49.089 [2024-12-08 20:03:20.939373] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.089 [2024-12-08 20:03:20.942000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.089 [2024-12-08 20:03:20.942045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:49.089 BaseBdev2 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 [2024-12-08 20:03:20.951296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.089 [2024-12-08 20:03:20.953537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.089 [2024-12-08 20:03:20.953778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.089 [2024-12-08 20:03:20.953796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:49.089 [2024-12-08 20:03:20.954090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:49.089 [2024-12-08 20:03:20.954300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.089 [2024-12-08 20:03:20.954323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:49.089 [2024-12-08 20:03:20.954496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.089 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.090 20:03:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.090 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.090 "name": "raid_bdev1", 00:07:49.090 "uuid": "fd92974f-081a-436e-b8a7-cebeddcd808c", 00:07:49.090 "strip_size_kb": 0, 00:07:49.090 "state": "online", 00:07:49.090 "raid_level": "raid1", 00:07:49.090 "superblock": true, 00:07:49.090 "num_base_bdevs": 2, 00:07:49.090 
"num_base_bdevs_discovered": 2, 00:07:49.090 "num_base_bdevs_operational": 2, 00:07:49.090 "base_bdevs_list": [ 00:07:49.090 { 00:07:49.090 "name": "BaseBdev1", 00:07:49.090 "uuid": "d97b7680-9a0a-5d64-8a76-11b7ea934a73", 00:07:49.090 "is_configured": true, 00:07:49.090 "data_offset": 2048, 00:07:49.090 "data_size": 63488 00:07:49.090 }, 00:07:49.090 { 00:07:49.090 "name": "BaseBdev2", 00:07:49.090 "uuid": "0f52ef02-45e8-57e5-a206-26910f980343", 00:07:49.090 "is_configured": true, 00:07:49.090 "data_offset": 2048, 00:07:49.090 "data_size": 63488 00:07:49.090 } 00:07:49.090 ] 00:07:49.090 }' 00:07:49.090 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.090 20:03:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.659 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:49.659 20:03:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:49.659 [2024-12-08 20:03:21.488016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:50.597 20:03:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.597 "name": "raid_bdev1", 00:07:50.597 "uuid": "fd92974f-081a-436e-b8a7-cebeddcd808c", 00:07:50.597 "strip_size_kb": 0, 00:07:50.597 "state": "online", 
00:07:50.597 "raid_level": "raid1", 00:07:50.597 "superblock": true, 00:07:50.597 "num_base_bdevs": 2, 00:07:50.597 "num_base_bdevs_discovered": 2, 00:07:50.597 "num_base_bdevs_operational": 2, 00:07:50.597 "base_bdevs_list": [ 00:07:50.597 { 00:07:50.597 "name": "BaseBdev1", 00:07:50.597 "uuid": "d97b7680-9a0a-5d64-8a76-11b7ea934a73", 00:07:50.597 "is_configured": true, 00:07:50.597 "data_offset": 2048, 00:07:50.597 "data_size": 63488 00:07:50.597 }, 00:07:50.597 { 00:07:50.597 "name": "BaseBdev2", 00:07:50.597 "uuid": "0f52ef02-45e8-57e5-a206-26910f980343", 00:07:50.597 "is_configured": true, 00:07:50.597 "data_offset": 2048, 00:07:50.597 "data_size": 63488 00:07:50.597 } 00:07:50.597 ] 00:07:50.597 }' 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.597 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.166 [2024-12-08 20:03:22.870154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.166 [2024-12-08 20:03:22.870314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.166 [2024-12-08 20:03:22.873196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.166 [2024-12-08 20:03:22.873303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.166 [2024-12-08 20:03:22.873459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.166 [2024-12-08 20:03:22.873534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:51.166 { 00:07:51.166 "results": [ 00:07:51.166 { 00:07:51.166 "job": "raid_bdev1", 00:07:51.166 "core_mask": "0x1", 00:07:51.166 "workload": "randrw", 00:07:51.166 "percentage": 50, 00:07:51.166 "status": "finished", 00:07:51.166 "queue_depth": 1, 00:07:51.166 "io_size": 131072, 00:07:51.166 "runtime": 1.3829, 00:07:51.166 "iops": 13262.708800347096, 00:07:51.166 "mibps": 1657.838600043387, 00:07:51.166 "io_failed": 0, 00:07:51.166 "io_timeout": 0, 00:07:51.166 "avg_latency_us": 72.49833115441125, 00:07:51.166 "min_latency_us": 24.370305676855896, 00:07:51.166 "max_latency_us": 1509.6174672489083 00:07:51.166 } 00:07:51.166 ], 00:07:51.166 "core_count": 1 00:07:51.166 } 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63378 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63378 ']' 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63378 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63378 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.166 killing process with pid 63378 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63378' 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63378 00:07:51.166 [2024-12-08 
20:03:22.918151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.166 20:03:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63378 00:07:51.166 [2024-12-08 20:03:23.075690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X6Ffd7s93v 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:52.543 ************************************ 00:07:52.543 END TEST raid_read_error_test 00:07:52.543 ************************************ 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:52.543 00:07:52.543 real 0m4.513s 00:07:52.543 user 0m5.231s 00:07:52.543 sys 0m0.664s 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.543 20:03:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.543 20:03:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:52.543 20:03:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.543 20:03:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.543 20:03:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.543 ************************************ 00:07:52.543 START TEST 
raid_write_error_test 00:07:52.543 ************************************ 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.543 20:03:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qkpi9ghZi1 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63518 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63518 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63518 ']' 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.543 20:03:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.801 [2024-12-08 20:03:24.580170] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:52.801 [2024-12-08 20:03:24.580431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:07:52.801 [2024-12-08 20:03:24.761816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.060 [2024-12-08 20:03:24.899170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.330 [2024-12-08 20:03:25.131162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.330 [2024-12-08 20:03:25.131342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 BaseBdev1_malloc 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 true 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 [2024-12-08 20:03:25.452480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.590 [2024-12-08 20:03:25.452564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.590 [2024-12-08 20:03:25.452588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.590 [2024-12-08 20:03:25.452602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.590 [2024-12-08 20:03:25.455036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.590 [2024-12-08 20:03:25.455079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.590 BaseBdev1 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 BaseBdev2_malloc 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.590 20:03:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 true 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 [2024-12-08 20:03:25.526029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.590 [2024-12-08 20:03:25.526095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.590 [2024-12-08 20:03:25.526113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:53.590 [2024-12-08 20:03:25.526126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.590 [2024-12-08 20:03:25.528506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.590 [2024-12-08 20:03:25.528553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.590 BaseBdev2 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 [2024-12-08 20:03:25.538069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:53.590 [2024-12-08 20:03:25.540149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.590 [2024-12-08 20:03:25.540371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.590 [2024-12-08 20:03:25.540388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.590 [2024-12-08 20:03:25.540633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:53.590 [2024-12-08 20:03:25.540821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.590 [2024-12-08 20:03:25.540832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:53.590 [2024-12-08 20:03:25.541012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.590 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.850 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.850 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.850 "name": "raid_bdev1", 00:07:53.850 "uuid": "6f84917f-74f1-4e29-92d2-d574c3e0261a", 00:07:53.850 "strip_size_kb": 0, 00:07:53.850 "state": "online", 00:07:53.850 "raid_level": "raid1", 00:07:53.850 "superblock": true, 00:07:53.850 "num_base_bdevs": 2, 00:07:53.850 "num_base_bdevs_discovered": 2, 00:07:53.850 "num_base_bdevs_operational": 2, 00:07:53.850 "base_bdevs_list": [ 00:07:53.850 { 00:07:53.850 "name": "BaseBdev1", 00:07:53.850 "uuid": "32c00dae-7b7f-5f45-8690-dcc1480e2b04", 00:07:53.850 "is_configured": true, 00:07:53.850 "data_offset": 2048, 00:07:53.850 "data_size": 63488 00:07:53.850 }, 00:07:53.850 { 00:07:53.850 "name": "BaseBdev2", 00:07:53.850 "uuid": "f9d5c74f-217b-5c6e-8c6f-d3077aea05f2", 00:07:53.850 "is_configured": true, 00:07:53.850 "data_offset": 2048, 00:07:53.850 "data_size": 63488 00:07:53.850 } 00:07:53.850 ] 00:07:53.850 }' 00:07:53.850 20:03:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.850 20:03:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.110 20:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.110 20:03:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:54.369 [2024-12-08 20:03:26.110643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.307 [2024-12-08 20:03:27.026084] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:55.307 [2024-12-08 20:03:27.026180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.307 [2024-12-08 20:03:27.026404] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.307 20:03:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.307 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.308 "name": "raid_bdev1", 00:07:55.308 "uuid": "6f84917f-74f1-4e29-92d2-d574c3e0261a", 00:07:55.308 "strip_size_kb": 0, 00:07:55.308 "state": "online", 00:07:55.308 "raid_level": "raid1", 00:07:55.308 "superblock": true, 00:07:55.308 "num_base_bdevs": 2, 00:07:55.308 "num_base_bdevs_discovered": 1, 00:07:55.308 "num_base_bdevs_operational": 1, 00:07:55.308 "base_bdevs_list": [ 00:07:55.308 { 00:07:55.308 "name": null, 00:07:55.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.308 "is_configured": false, 00:07:55.308 "data_offset": 0, 00:07:55.308 "data_size": 63488 00:07:55.308 }, 
00:07:55.308 { 00:07:55.308 "name": "BaseBdev2", 00:07:55.308 "uuid": "f9d5c74f-217b-5c6e-8c6f-d3077aea05f2", 00:07:55.308 "is_configured": true, 00:07:55.308 "data_offset": 2048, 00:07:55.308 "data_size": 63488 00:07:55.308 } 00:07:55.308 ] 00:07:55.308 }' 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.308 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.567 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.567 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.567 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.567 [2024-12-08 20:03:27.451341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.567 [2024-12-08 20:03:27.451513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.567 [2024-12-08 20:03:27.454159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.568 [2024-12-08 20:03:27.454251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.568 [2024-12-08 20:03:27.454405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.568 [2024-12-08 20:03:27.454474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:55.568 { 00:07:55.568 "results": [ 00:07:55.568 { 00:07:55.568 "job": "raid_bdev1", 00:07:55.568 "core_mask": "0x1", 00:07:55.568 "workload": "randrw", 00:07:55.568 "percentage": 50, 00:07:55.568 "status": "finished", 00:07:55.568 "queue_depth": 1, 00:07:55.568 "io_size": 131072, 00:07:55.568 "runtime": 1.341313, 00:07:55.568 "iops": 16129.71767216153, 00:07:55.568 "mibps": 2016.2147090201913, 00:07:55.568 "io_failed": 0, 
00:07:55.568 "io_timeout": 0, 00:07:55.568 "avg_latency_us": 59.069650967874104, 00:07:55.568 "min_latency_us": 24.258515283842794, 00:07:55.568 "max_latency_us": 1409.4532751091704 00:07:55.568 } 00:07:55.568 ], 00:07:55.568 "core_count": 1 00:07:55.568 } 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63518 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63518 ']' 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63518 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63518 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.568 killing process with pid 63518 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63518' 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63518 00:07:55.568 [2024-12-08 20:03:27.488966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.568 20:03:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63518 00:07:55.827 [2024-12-08 20:03:27.635866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.217 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qkpi9ghZi1 00:07:57.217 20:03:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:57.218 00:07:57.218 real 0m4.487s 00:07:57.218 user 0m5.215s 00:07:57.218 sys 0m0.644s 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.218 ************************************ 00:07:57.218 END TEST raid_write_error_test 00:07:57.218 ************************************ 00:07:57.218 20:03:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 20:03:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:57.218 20:03:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:57.218 20:03:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:57.218 20:03:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.218 20:03:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.218 20:03:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 ************************************ 00:07:57.218 START TEST raid_state_function_test 00:07:57.218 ************************************ 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:57.218 20:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:57.218 Process raid pid: 63660 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63660 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63660' 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63660 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63660 ']' 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.218 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 [2024-12-08 20:03:29.132520] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:57.218 [2024-12-08 20:03:29.132756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.478 [2024-12-08 20:03:29.312125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.478 [2024-12-08 20:03:29.453056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.738 [2024-12-08 20:03:29.695878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.738 [2024-12-08 20:03:29.696071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.998 [2024-12-08 20:03:29.942943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.998 [2024-12-08 20:03:29.943049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.998 [2024-12-08 20:03:29.943063] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.998 [2024-12-08 20:03:29.943075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.998 [2024-12-08 20:03:29.943084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.998 [2024-12-08 20:03:29.943095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.998 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.999 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:57.999 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.999 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.999 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.258 "name": "Existed_Raid", 00:07:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.258 "strip_size_kb": 64, 00:07:58.258 "state": "configuring", 00:07:58.258 "raid_level": "raid0", 00:07:58.258 "superblock": false, 00:07:58.258 "num_base_bdevs": 3, 00:07:58.258 "num_base_bdevs_discovered": 0, 00:07:58.258 "num_base_bdevs_operational": 3, 00:07:58.258 "base_bdevs_list": [ 00:07:58.258 { 00:07:58.258 "name": "BaseBdev1", 00:07:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.258 "is_configured": false, 00:07:58.258 "data_offset": 0, 00:07:58.258 "data_size": 0 00:07:58.258 }, 00:07:58.258 { 00:07:58.258 "name": "BaseBdev2", 00:07:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.258 "is_configured": false, 00:07:58.258 "data_offset": 0, 00:07:58.258 "data_size": 0 00:07:58.258 }, 00:07:58.258 { 00:07:58.258 "name": "BaseBdev3", 00:07:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.258 "is_configured": false, 00:07:58.258 "data_offset": 0, 00:07:58.258 "data_size": 0 00:07:58.258 } 00:07:58.258 ] 00:07:58.258 }' 00:07:58.258 20:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.258 20:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.519 20:03:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 [2024-12-08 20:03:30.342265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.519 [2024-12-08 20:03:30.342428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 [2024-12-08 20:03:30.354194] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.519 [2024-12-08 20:03:30.354311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.519 [2024-12-08 20:03:30.354344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.519 [2024-12-08 20:03:30.354371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.519 [2024-12-08 20:03:30.354393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:58.519 [2024-12-08 20:03:30.354421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 [2024-12-08 20:03:30.408355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.519 BaseBdev1 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.519 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.519 [ 00:07:58.519 { 00:07:58.519 "name": "BaseBdev1", 00:07:58.519 "aliases": [ 00:07:58.519 "30054ca2-dcb6-4136-9f67-01414dd4d2f7" 00:07:58.519 ], 00:07:58.519 
"product_name": "Malloc disk", 00:07:58.519 "block_size": 512, 00:07:58.519 "num_blocks": 65536, 00:07:58.519 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:07:58.519 "assigned_rate_limits": { 00:07:58.519 "rw_ios_per_sec": 0, 00:07:58.519 "rw_mbytes_per_sec": 0, 00:07:58.519 "r_mbytes_per_sec": 0, 00:07:58.519 "w_mbytes_per_sec": 0 00:07:58.519 }, 00:07:58.519 "claimed": true, 00:07:58.519 "claim_type": "exclusive_write", 00:07:58.519 "zoned": false, 00:07:58.519 "supported_io_types": { 00:07:58.519 "read": true, 00:07:58.519 "write": true, 00:07:58.519 "unmap": true, 00:07:58.519 "flush": true, 00:07:58.519 "reset": true, 00:07:58.519 "nvme_admin": false, 00:07:58.519 "nvme_io": false, 00:07:58.519 "nvme_io_md": false, 00:07:58.519 "write_zeroes": true, 00:07:58.519 "zcopy": true, 00:07:58.519 "get_zone_info": false, 00:07:58.519 "zone_management": false, 00:07:58.519 "zone_append": false, 00:07:58.519 "compare": false, 00:07:58.519 "compare_and_write": false, 00:07:58.519 "abort": true, 00:07:58.519 "seek_hole": false, 00:07:58.519 "seek_data": false, 00:07:58.519 "copy": true, 00:07:58.519 "nvme_iov_md": false 00:07:58.519 }, 00:07:58.519 "memory_domains": [ 00:07:58.519 { 00:07:58.519 "dma_device_id": "system", 00:07:58.519 "dma_device_type": 1 00:07:58.519 }, 00:07:58.519 { 00:07:58.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.519 "dma_device_type": 2 00:07:58.519 } 00:07:58.519 ], 00:07:58.519 "driver_specific": {} 00:07:58.519 } 00:07:58.519 ] 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.520 20:03:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.520 "name": "Existed_Raid", 00:07:58.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.520 "strip_size_kb": 64, 00:07:58.520 "state": "configuring", 00:07:58.520 "raid_level": "raid0", 00:07:58.520 "superblock": false, 00:07:58.520 "num_base_bdevs": 3, 00:07:58.520 "num_base_bdevs_discovered": 1, 00:07:58.520 "num_base_bdevs_operational": 3, 00:07:58.520 "base_bdevs_list": [ 00:07:58.520 { 00:07:58.520 "name": "BaseBdev1", 
00:07:58.520 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:07:58.520 "is_configured": true, 00:07:58.520 "data_offset": 0, 00:07:58.520 "data_size": 65536 00:07:58.520 }, 00:07:58.520 { 00:07:58.520 "name": "BaseBdev2", 00:07:58.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.520 "is_configured": false, 00:07:58.520 "data_offset": 0, 00:07:58.520 "data_size": 0 00:07:58.520 }, 00:07:58.520 { 00:07:58.520 "name": "BaseBdev3", 00:07:58.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.520 "is_configured": false, 00:07:58.520 "data_offset": 0, 00:07:58.520 "data_size": 0 00:07:58.520 } 00:07:58.520 ] 00:07:58.520 }' 00:07:58.520 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.780 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 [2024-12-08 20:03:30.883702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.040 [2024-12-08 20:03:30.883793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 [2024-12-08 
20:03:30.895776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.040 [2024-12-08 20:03:30.898282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.040 [2024-12-08 20:03:30.898391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.040 [2024-12-08 20:03:30.898427] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:59.040 [2024-12-08 20:03:30.898457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.040 "name": "Existed_Raid", 00:07:59.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.040 "strip_size_kb": 64, 00:07:59.040 "state": "configuring", 00:07:59.040 "raid_level": "raid0", 00:07:59.040 "superblock": false, 00:07:59.040 "num_base_bdevs": 3, 00:07:59.040 "num_base_bdevs_discovered": 1, 00:07:59.040 "num_base_bdevs_operational": 3, 00:07:59.040 "base_bdevs_list": [ 00:07:59.040 { 00:07:59.040 "name": "BaseBdev1", 00:07:59.040 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:07:59.040 "is_configured": true, 00:07:59.040 "data_offset": 0, 00:07:59.040 "data_size": 65536 00:07:59.040 }, 00:07:59.040 { 00:07:59.040 "name": "BaseBdev2", 00:07:59.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.040 "is_configured": false, 00:07:59.040 "data_offset": 0, 00:07:59.040 "data_size": 0 00:07:59.040 }, 00:07:59.040 { 00:07:59.040 "name": "BaseBdev3", 00:07:59.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.040 "is_configured": false, 00:07:59.040 "data_offset": 0, 00:07:59.040 "data_size": 0 00:07:59.040 } 00:07:59.040 ] 00:07:59.040 }' 00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:59.040 20:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.610 [2024-12-08 20:03:31.432364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.610 BaseBdev2 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.610 20:03:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.610 [ 00:07:59.610 { 00:07:59.610 "name": "BaseBdev2", 00:07:59.610 "aliases": [ 00:07:59.610 "d064dd9c-ae91-4245-8b3e-a0c732cfe569" 00:07:59.610 ], 00:07:59.610 "product_name": "Malloc disk", 00:07:59.610 "block_size": 512, 00:07:59.610 "num_blocks": 65536, 00:07:59.610 "uuid": "d064dd9c-ae91-4245-8b3e-a0c732cfe569", 00:07:59.610 "assigned_rate_limits": { 00:07:59.610 "rw_ios_per_sec": 0, 00:07:59.610 "rw_mbytes_per_sec": 0, 00:07:59.610 "r_mbytes_per_sec": 0, 00:07:59.610 "w_mbytes_per_sec": 0 00:07:59.610 }, 00:07:59.610 "claimed": true, 00:07:59.610 "claim_type": "exclusive_write", 00:07:59.610 "zoned": false, 00:07:59.610 "supported_io_types": { 00:07:59.610 "read": true, 00:07:59.610 "write": true, 00:07:59.610 "unmap": true, 00:07:59.610 "flush": true, 00:07:59.610 "reset": true, 00:07:59.610 "nvme_admin": false, 00:07:59.610 "nvme_io": false, 00:07:59.610 "nvme_io_md": false, 00:07:59.610 "write_zeroes": true, 00:07:59.610 "zcopy": true, 00:07:59.610 "get_zone_info": false, 00:07:59.610 "zone_management": false, 00:07:59.610 "zone_append": false, 00:07:59.610 "compare": false, 00:07:59.610 "compare_and_write": false, 00:07:59.610 "abort": true, 00:07:59.610 "seek_hole": false, 00:07:59.610 "seek_data": false, 00:07:59.610 "copy": true, 00:07:59.610 "nvme_iov_md": false 00:07:59.610 }, 00:07:59.610 "memory_domains": [ 00:07:59.610 { 00:07:59.610 "dma_device_id": "system", 00:07:59.610 "dma_device_type": 1 00:07:59.610 }, 00:07:59.610 { 00:07:59.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.610 "dma_device_type": 2 00:07:59.610 } 00:07:59.610 ], 00:07:59.610 "driver_specific": {} 00:07:59.610 } 00:07:59.610 ] 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.610 20:03:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.610 "name": "Existed_Raid", 00:07:59.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.610 "strip_size_kb": 64, 00:07:59.610 "state": "configuring", 00:07:59.610 "raid_level": "raid0", 00:07:59.610 "superblock": false, 00:07:59.610 "num_base_bdevs": 3, 00:07:59.610 "num_base_bdevs_discovered": 2, 00:07:59.610 "num_base_bdevs_operational": 3, 00:07:59.610 "base_bdevs_list": [ 00:07:59.610 { 00:07:59.610 "name": "BaseBdev1", 00:07:59.610 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:07:59.610 "is_configured": true, 00:07:59.610 "data_offset": 0, 00:07:59.610 "data_size": 65536 00:07:59.610 }, 00:07:59.610 { 00:07:59.610 "name": "BaseBdev2", 00:07:59.610 "uuid": "d064dd9c-ae91-4245-8b3e-a0c732cfe569", 00:07:59.610 "is_configured": true, 00:07:59.610 "data_offset": 0, 00:07:59.610 "data_size": 65536 00:07:59.610 }, 00:07:59.610 { 00:07:59.610 "name": "BaseBdev3", 00:07:59.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.610 "is_configured": false, 00:07:59.610 "data_offset": 0, 00:07:59.610 "data_size": 0 00:07:59.610 } 00:07:59.610 ] 00:07:59.610 }' 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.610 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.180 [2024-12-08 20:03:31.910115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.180 [2024-12-08 20:03:31.910172] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.180 [2024-12-08 20:03:31.910189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:00.180 [2024-12-08 20:03:31.910495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:00.180 [2024-12-08 20:03:31.910703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.180 [2024-12-08 20:03:31.910715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:00.180 [2024-12-08 20:03:31.911034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.180 BaseBdev3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.180 
20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.180 [ 00:08:00.180 { 00:08:00.180 "name": "BaseBdev3", 00:08:00.180 "aliases": [ 00:08:00.180 "e17d5cd9-c8b0-4929-a9a5-9ae9a2b86040" 00:08:00.180 ], 00:08:00.180 "product_name": "Malloc disk", 00:08:00.180 "block_size": 512, 00:08:00.180 "num_blocks": 65536, 00:08:00.180 "uuid": "e17d5cd9-c8b0-4929-a9a5-9ae9a2b86040", 00:08:00.180 "assigned_rate_limits": { 00:08:00.180 "rw_ios_per_sec": 0, 00:08:00.180 "rw_mbytes_per_sec": 0, 00:08:00.180 "r_mbytes_per_sec": 0, 00:08:00.180 "w_mbytes_per_sec": 0 00:08:00.180 }, 00:08:00.180 "claimed": true, 00:08:00.180 "claim_type": "exclusive_write", 00:08:00.180 "zoned": false, 00:08:00.180 "supported_io_types": { 00:08:00.180 "read": true, 00:08:00.180 "write": true, 00:08:00.180 "unmap": true, 00:08:00.180 "flush": true, 00:08:00.180 "reset": true, 00:08:00.180 "nvme_admin": false, 00:08:00.180 "nvme_io": false, 00:08:00.180 "nvme_io_md": false, 00:08:00.180 "write_zeroes": true, 00:08:00.180 "zcopy": true, 00:08:00.180 "get_zone_info": false, 00:08:00.180 "zone_management": false, 00:08:00.180 "zone_append": false, 00:08:00.180 "compare": false, 00:08:00.180 "compare_and_write": false, 00:08:00.180 "abort": true, 00:08:00.180 "seek_hole": false, 00:08:00.180 "seek_data": false, 00:08:00.180 "copy": true, 00:08:00.180 "nvme_iov_md": false 00:08:00.180 }, 00:08:00.180 "memory_domains": [ 00:08:00.180 { 00:08:00.180 "dma_device_id": "system", 00:08:00.180 "dma_device_type": 1 00:08:00.180 }, 00:08:00.180 { 00:08:00.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.180 "dma_device_type": 2 00:08:00.180 } 00:08:00.180 ], 00:08:00.180 "driver_specific": {} 00:08:00.180 } 00:08:00.180 ] 
00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.180 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.180 "name": "Existed_Raid", 00:08:00.180 "uuid": "6ccb079a-2344-439c-8c56-f4e27f012161", 00:08:00.180 "strip_size_kb": 64, 00:08:00.180 "state": "online", 00:08:00.180 "raid_level": "raid0", 00:08:00.180 "superblock": false, 00:08:00.180 "num_base_bdevs": 3, 00:08:00.180 "num_base_bdevs_discovered": 3, 00:08:00.180 "num_base_bdevs_operational": 3, 00:08:00.180 "base_bdevs_list": [ 00:08:00.180 { 00:08:00.180 "name": "BaseBdev1", 00:08:00.180 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:08:00.180 "is_configured": true, 00:08:00.180 "data_offset": 0, 00:08:00.181 "data_size": 65536 00:08:00.181 }, 00:08:00.181 { 00:08:00.181 "name": "BaseBdev2", 00:08:00.181 "uuid": "d064dd9c-ae91-4245-8b3e-a0c732cfe569", 00:08:00.181 "is_configured": true, 00:08:00.181 "data_offset": 0, 00:08:00.181 "data_size": 65536 00:08:00.181 }, 00:08:00.181 { 00:08:00.181 "name": "BaseBdev3", 00:08:00.181 "uuid": "e17d5cd9-c8b0-4929-a9a5-9ae9a2b86040", 00:08:00.181 "is_configured": true, 00:08:00.181 "data_offset": 0, 00:08:00.181 "data_size": 65536 00:08:00.181 } 00:08:00.181 ] 00:08:00.181 }' 00:08:00.181 20:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.181 20:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.440 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:00.441 [2024-12-08 20:03:32.397750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.441 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.701 "name": "Existed_Raid", 00:08:00.701 "aliases": [ 00:08:00.701 "6ccb079a-2344-439c-8c56-f4e27f012161" 00:08:00.701 ], 00:08:00.701 "product_name": "Raid Volume", 00:08:00.701 "block_size": 512, 00:08:00.701 "num_blocks": 196608, 00:08:00.701 "uuid": "6ccb079a-2344-439c-8c56-f4e27f012161", 00:08:00.701 "assigned_rate_limits": { 00:08:00.701 "rw_ios_per_sec": 0, 00:08:00.701 "rw_mbytes_per_sec": 0, 00:08:00.701 "r_mbytes_per_sec": 0, 00:08:00.701 "w_mbytes_per_sec": 0 00:08:00.701 }, 00:08:00.701 "claimed": false, 00:08:00.701 "zoned": false, 00:08:00.701 "supported_io_types": { 00:08:00.701 "read": true, 00:08:00.701 "write": true, 00:08:00.701 "unmap": true, 00:08:00.701 "flush": true, 00:08:00.701 "reset": true, 00:08:00.701 "nvme_admin": false, 00:08:00.701 "nvme_io": false, 00:08:00.701 "nvme_io_md": false, 00:08:00.701 "write_zeroes": true, 00:08:00.701 "zcopy": false, 00:08:00.701 "get_zone_info": false, 00:08:00.701 "zone_management": false, 00:08:00.701 
"zone_append": false, 00:08:00.701 "compare": false, 00:08:00.701 "compare_and_write": false, 00:08:00.701 "abort": false, 00:08:00.701 "seek_hole": false, 00:08:00.701 "seek_data": false, 00:08:00.701 "copy": false, 00:08:00.701 "nvme_iov_md": false 00:08:00.701 }, 00:08:00.701 "memory_domains": [ 00:08:00.701 { 00:08:00.701 "dma_device_id": "system", 00:08:00.701 "dma_device_type": 1 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.701 "dma_device_type": 2 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "dma_device_id": "system", 00:08:00.701 "dma_device_type": 1 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.701 "dma_device_type": 2 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "dma_device_id": "system", 00:08:00.701 "dma_device_type": 1 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.701 "dma_device_type": 2 00:08:00.701 } 00:08:00.701 ], 00:08:00.701 "driver_specific": { 00:08:00.701 "raid": { 00:08:00.701 "uuid": "6ccb079a-2344-439c-8c56-f4e27f012161", 00:08:00.701 "strip_size_kb": 64, 00:08:00.701 "state": "online", 00:08:00.701 "raid_level": "raid0", 00:08:00.701 "superblock": false, 00:08:00.701 "num_base_bdevs": 3, 00:08:00.701 "num_base_bdevs_discovered": 3, 00:08:00.701 "num_base_bdevs_operational": 3, 00:08:00.701 "base_bdevs_list": [ 00:08:00.701 { 00:08:00.701 "name": "BaseBdev1", 00:08:00.701 "uuid": "30054ca2-dcb6-4136-9f67-01414dd4d2f7", 00:08:00.701 "is_configured": true, 00:08:00.701 "data_offset": 0, 00:08:00.701 "data_size": 65536 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "name": "BaseBdev2", 00:08:00.701 "uuid": "d064dd9c-ae91-4245-8b3e-a0c732cfe569", 00:08:00.701 "is_configured": true, 00:08:00.701 "data_offset": 0, 00:08:00.701 "data_size": 65536 00:08:00.701 }, 00:08:00.701 { 00:08:00.701 "name": "BaseBdev3", 00:08:00.701 "uuid": "e17d5cd9-c8b0-4929-a9a5-9ae9a2b86040", 00:08:00.701 "is_configured": true, 
00:08:00.701 "data_offset": 0, 00:08:00.701 "data_size": 65536 00:08:00.701 } 00:08:00.701 ] 00:08:00.701 } 00:08:00.701 } 00:08:00.701 }' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:00.701 BaseBdev2 00:08:00.701 BaseBdev3' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.701 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.702 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 [2024-12-08 20:03:32.684917] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:00.962 [2024-12-08 20:03:32.685064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.962 [2024-12-08 20:03:32.685143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.962 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.963 20:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.963 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.963 "name": "Existed_Raid", 00:08:00.963 "uuid": "6ccb079a-2344-439c-8c56-f4e27f012161", 00:08:00.963 "strip_size_kb": 64, 00:08:00.963 "state": "offline", 00:08:00.963 "raid_level": "raid0", 00:08:00.963 "superblock": false, 00:08:00.963 "num_base_bdevs": 3, 00:08:00.963 "num_base_bdevs_discovered": 2, 00:08:00.963 "num_base_bdevs_operational": 2, 00:08:00.963 "base_bdevs_list": [ 00:08:00.963 { 00:08:00.963 "name": null, 00:08:00.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.963 "is_configured": false, 00:08:00.963 "data_offset": 0, 00:08:00.963 "data_size": 65536 00:08:00.963 }, 00:08:00.963 { 00:08:00.963 "name": "BaseBdev2", 00:08:00.963 "uuid": "d064dd9c-ae91-4245-8b3e-a0c732cfe569", 00:08:00.963 "is_configured": true, 00:08:00.963 "data_offset": 0, 00:08:00.963 "data_size": 65536 00:08:00.963 }, 00:08:00.963 { 00:08:00.963 "name": "BaseBdev3", 00:08:00.963 "uuid": "e17d5cd9-c8b0-4929-a9a5-9ae9a2b86040", 00:08:00.963 "is_configured": true, 00:08:00.963 "data_offset": 0, 00:08:00.963 "data_size": 65536 00:08:00.963 } 00:08:00.963 ] 00:08:00.963 }' 00:08:00.963 20:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.963 20:03:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.533 [2024-12-08 20:03:33.274188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.533 20:03:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.533 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.533 [2024-12-08 20:03:33.434318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.533 [2024-12-08 20:03:33.434455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.794 BaseBdev2 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.794 [ 00:08:01.794 { 00:08:01.794 "name": "BaseBdev2", 00:08:01.794 "aliases": [ 00:08:01.794 "f352a1e1-a481-4338-a460-ecf488934abc" 00:08:01.794 ], 00:08:01.794 "product_name": "Malloc disk", 00:08:01.794 "block_size": 512, 00:08:01.794 "num_blocks": 65536, 00:08:01.794 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc", 00:08:01.794 "assigned_rate_limits": { 00:08:01.794 "rw_ios_per_sec": 0, 00:08:01.794 "rw_mbytes_per_sec": 0, 00:08:01.794 "r_mbytes_per_sec": 0, 00:08:01.794 "w_mbytes_per_sec": 0 00:08:01.794 }, 00:08:01.794 "claimed": false, 00:08:01.794 "zoned": false, 00:08:01.794 "supported_io_types": { 00:08:01.794 "read": true, 00:08:01.794 "write": true, 00:08:01.794 "unmap": true, 00:08:01.794 "flush": true, 00:08:01.794 "reset": true, 00:08:01.794 "nvme_admin": false, 00:08:01.794 "nvme_io": false, 00:08:01.794 "nvme_io_md": false, 00:08:01.794 "write_zeroes": true, 00:08:01.794 "zcopy": true, 00:08:01.794 "get_zone_info": false, 00:08:01.794 "zone_management": false, 00:08:01.794 "zone_append": false, 00:08:01.794 "compare": false, 00:08:01.794 "compare_and_write": false, 00:08:01.794 "abort": true, 00:08:01.794 "seek_hole": false, 00:08:01.794 "seek_data": false, 00:08:01.794 "copy": true, 00:08:01.794 "nvme_iov_md": false 00:08:01.794 }, 00:08:01.794 "memory_domains": [ 00:08:01.794 { 00:08:01.794 "dma_device_id": "system", 00:08:01.794 "dma_device_type": 1 00:08:01.794 }, 
00:08:01.794 {
00:08:01.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:01.794 "dma_device_type": 2
00:08:01.794 }
00:08:01.794 ],
00:08:01.794 "driver_specific": {}
00:08:01.794 }
00:08:01.794 ]
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.794 BaseBdev3
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:01.794 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.795 [
00:08:01.795 {
00:08:01.795 "name": "BaseBdev3",
00:08:01.795 "aliases": [
00:08:01.795 "80f25461-ea9f-46d6-9a95-44a1c91ea4f4"
00:08:01.795 ],
00:08:01.795 "product_name": "Malloc disk",
00:08:01.795 "block_size": 512,
00:08:01.795 "num_blocks": 65536,
00:08:01.795 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:01.795 "assigned_rate_limits": {
00:08:01.795 "rw_ios_per_sec": 0,
00:08:01.795 "rw_mbytes_per_sec": 0,
00:08:01.795 "r_mbytes_per_sec": 0,
00:08:01.795 "w_mbytes_per_sec": 0
00:08:01.795 },
00:08:01.795 "claimed": false,
00:08:01.795 "zoned": false,
00:08:01.795 "supported_io_types": {
00:08:01.795 "read": true,
00:08:01.795 "write": true,
00:08:01.795 "unmap": true,
00:08:01.795 "flush": true,
00:08:01.795 "reset": true,
00:08:01.795 "nvme_admin": false,
00:08:01.795 "nvme_io": false,
00:08:01.795 "nvme_io_md": false,
00:08:01.795 "write_zeroes": true,
00:08:01.795 "zcopy": true,
00:08:01.795 "get_zone_info": false,
00:08:01.795 "zone_management": false,
00:08:01.795 "zone_append": false,
00:08:01.795 "compare": false,
00:08:01.795 "compare_and_write": false,
00:08:01.795 "abort": true,
00:08:01.795 "seek_hole": false,
00:08:01.795 "seek_data": false,
00:08:01.795 "copy": true,
00:08:01.795 "nvme_iov_md": false
00:08:01.795 },
00:08:01.795 "memory_domains": [
00:08:01.795 {
00:08:01.795 "dma_device_id": "system",
00:08:01.795 "dma_device_type": 1
00:08:01.795 },
00:08:01.795 {
00:08:01.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:01.795 "dma_device_type": 2
00:08:01.795 }
00:08:01.795 ],
00:08:01.795 "driver_specific": {}
00:08:01.795 }
00:08:01.795 ]
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.795 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:01.795 [2024-12-08 20:03:33.767307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:01.795 [2024-12-08 20:03:33.767457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:01.795 [2024-12-08 20:03:33.767512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:01.795 [2024-12-08 20:03:33.769810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:02.055 "name": "Existed_Raid",
00:08:02.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:02.055 "strip_size_kb": 64,
00:08:02.055 "state": "configuring",
00:08:02.055 "raid_level": "raid0",
00:08:02.055 "superblock": false,
00:08:02.055 "num_base_bdevs": 3,
00:08:02.055 "num_base_bdevs_discovered": 2,
00:08:02.055 "num_base_bdevs_operational": 3,
00:08:02.055 "base_bdevs_list": [
00:08:02.055 {
00:08:02.055 "name": "BaseBdev1",
00:08:02.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:02.055 "is_configured": false,
00:08:02.055 "data_offset": 0,
00:08:02.055 "data_size": 0
00:08:02.055 },
00:08:02.055 {
00:08:02.055 "name": "BaseBdev2",
00:08:02.055 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:02.055 "is_configured": true,
00:08:02.055 "data_offset": 0,
00:08:02.055 "data_size": 65536
00:08:02.055 },
00:08:02.055 {
00:08:02.055 "name": "BaseBdev3",
00:08:02.055 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:02.055 "is_configured": true,
00:08:02.055 "data_offset": 0,
00:08:02.055 "data_size": 65536
00:08:02.055 }
00:08:02.055 ]
00:08:02.055 }'
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:02.055 20:03:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.315 [2024-12-08 20:03:34.242592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.315 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.575 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:02.575 "name": "Existed_Raid",
00:08:02.575 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:02.575 "strip_size_kb": 64,
00:08:02.575 "state": "configuring",
00:08:02.575 "raid_level": "raid0",
00:08:02.575 "superblock": false,
00:08:02.575 "num_base_bdevs": 3,
00:08:02.575 "num_base_bdevs_discovered": 1,
00:08:02.575 "num_base_bdevs_operational": 3,
00:08:02.575 "base_bdevs_list": [
00:08:02.575 {
00:08:02.575 "name": "BaseBdev1",
00:08:02.575 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:02.575 "is_configured": false,
00:08:02.575 "data_offset": 0,
00:08:02.575 "data_size": 0
00:08:02.575 },
00:08:02.575 {
00:08:02.575 "name": null,
00:08:02.575 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:02.575 "is_configured": false,
00:08:02.575 "data_offset": 0,
00:08:02.575 "data_size": 65536
00:08:02.575 },
00:08:02.575 {
00:08:02.575 "name": "BaseBdev3",
00:08:02.575 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:02.575 "is_configured": true,
00:08:02.575 "data_offset": 0,
00:08:02.575 "data_size": 65536
00:08:02.575 }
00:08:02.575 ]
00:08:02.575 }'
00:08:02.575 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:02.575 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.836 [2024-12-08 20:03:34.785052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:02.836 BaseBdev1
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.836 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:02.836 [
00:08:02.836 {
00:08:02.836 "name": "BaseBdev1",
00:08:02.836 "aliases": [
00:08:02.836 "f52c602c-d42c-4604-840d-245ec0ae8d57"
00:08:02.836 ],
00:08:02.836 "product_name": "Malloc disk",
00:08:02.836 "block_size": 512,
00:08:02.836 "num_blocks": 65536,
00:08:02.836 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:02.836 "assigned_rate_limits": {
00:08:03.096 "rw_ios_per_sec": 0,
00:08:03.096 "rw_mbytes_per_sec": 0,
00:08:03.096 "r_mbytes_per_sec": 0,
00:08:03.096 "w_mbytes_per_sec": 0
00:08:03.096 },
00:08:03.096 "claimed": true,
00:08:03.096 "claim_type": "exclusive_write",
00:08:03.096 "zoned": false,
00:08:03.096 "supported_io_types": {
00:08:03.096 "read": true,
00:08:03.096 "write": true,
00:08:03.096 "unmap": true,
00:08:03.096 "flush": true,
00:08:03.096 "reset": true,
00:08:03.096 "nvme_admin": false,
00:08:03.096 "nvme_io": false,
00:08:03.096 "nvme_io_md": false,
00:08:03.096 "write_zeroes": true,
00:08:03.096 "zcopy": true,
00:08:03.096 "get_zone_info": false,
00:08:03.096 "zone_management": false,
00:08:03.096 "zone_append": false,
00:08:03.096 "compare": false,
00:08:03.096 "compare_and_write": false,
00:08:03.096 "abort": true,
00:08:03.096 "seek_hole": false,
00:08:03.096 "seek_data": false,
00:08:03.096 "copy": true,
00:08:03.096 "nvme_iov_md": false
00:08:03.096 },
00:08:03.096 "memory_domains": [
00:08:03.096 {
00:08:03.096 "dma_device_id": "system",
00:08:03.096 "dma_device_type": 1
00:08:03.096 },
00:08:03.096 {
00:08:03.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:03.096 "dma_device_type": 2
00:08:03.096 }
00:08:03.096 ],
00:08:03.096 "driver_specific": {}
00:08:03.096 }
00:08:03.096 ]
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.096 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:03.096 "name": "Existed_Raid",
00:08:03.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:03.096 "strip_size_kb": 64,
00:08:03.096 "state": "configuring",
00:08:03.096 "raid_level": "raid0",
00:08:03.096 "superblock": false,
00:08:03.096 "num_base_bdevs": 3,
00:08:03.096 "num_base_bdevs_discovered": 2,
00:08:03.096 "num_base_bdevs_operational": 3,
00:08:03.096 "base_bdevs_list": [
00:08:03.096 {
00:08:03.096 "name": "BaseBdev1",
00:08:03.096 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:03.096 "is_configured": true,
00:08:03.096 "data_offset": 0,
00:08:03.096 "data_size": 65536
00:08:03.096 },
00:08:03.096 {
00:08:03.096 "name": null,
00:08:03.096 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:03.096 "is_configured": false,
00:08:03.096 "data_offset": 0,
00:08:03.096 "data_size": 65536
00:08:03.097 },
00:08:03.097 {
00:08:03.097 "name": "BaseBdev3",
00:08:03.097 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:03.097 "is_configured": true,
00:08:03.097 "data_offset": 0,
00:08:03.097 "data_size": 65536
00:08:03.097 }
00:08:03.097 ]
00:08:03.097 }'
00:08:03.097 20:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:03.097 20:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.363 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.363 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:03.363 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.363 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.363 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.624 [2024-12-08 20:03:35.348171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.624 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:03.624 "name": "Existed_Raid",
00:08:03.624 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:03.624 "strip_size_kb": 64,
00:08:03.624 "state": "configuring",
00:08:03.624 "raid_level": "raid0",
00:08:03.624 "superblock": false,
00:08:03.624 "num_base_bdevs": 3,
00:08:03.624 "num_base_bdevs_discovered": 1,
00:08:03.624 "num_base_bdevs_operational": 3,
00:08:03.624 "base_bdevs_list": [
00:08:03.624 {
00:08:03.624 "name": "BaseBdev1",
00:08:03.624 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:03.624 "is_configured": true,
00:08:03.625 "data_offset": 0,
00:08:03.625 "data_size": 65536
00:08:03.625 },
00:08:03.625 {
00:08:03.625 "name": null,
00:08:03.625 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:03.625 "is_configured": false,
00:08:03.625 "data_offset": 0,
00:08:03.625 "data_size": 65536
00:08:03.625 },
00:08:03.625 {
00:08:03.625 "name": null,
00:08:03.625 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:03.625 "is_configured": false,
00:08:03.625 "data_offset": 0,
00:08:03.625 "data_size": 65536
00:08:03.625 }
00:08:03.625 ]
00:08:03.625 }'
00:08:03.625 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:03.625 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.883 [2024-12-08 20:03:35.815514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:03.883 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.142 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:04.142 "name": "Existed_Raid",
00:08:04.142 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:04.142 "strip_size_kb": 64,
00:08:04.142 "state": "configuring",
00:08:04.142 "raid_level": "raid0",
00:08:04.142 "superblock": false,
00:08:04.142 "num_base_bdevs": 3,
00:08:04.142 "num_base_bdevs_discovered": 2,
00:08:04.142 "num_base_bdevs_operational": 3,
00:08:04.142 "base_bdevs_list": [
00:08:04.142 {
00:08:04.142 "name": "BaseBdev1",
00:08:04.142 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:04.142 "is_configured": true,
00:08:04.142 "data_offset": 0,
00:08:04.142 "data_size": 65536
00:08:04.142 },
00:08:04.142 {
00:08:04.142 "name": null,
00:08:04.142 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:04.142 "is_configured": false,
00:08:04.142 "data_offset": 0,
00:08:04.142 "data_size": 65536
00:08:04.142 },
00:08:04.142 {
00:08:04.142 "name": "BaseBdev3",
00:08:04.142 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:04.142 "is_configured": true,
00:08:04.142 "data_offset": 0,
00:08:04.142 "data_size": 65536
00:08:04.142 }
00:08:04.142 ]
00:08:04.142 }'
00:08:04.142 20:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:04.142 20:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.401 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.401 [2024-12-08 20:03:36.310748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:04.661 "name": "Existed_Raid",
00:08:04.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:04.661 "strip_size_kb": 64,
00:08:04.661 "state": "configuring",
00:08:04.661 "raid_level": "raid0",
00:08:04.661 "superblock": false,
00:08:04.661 "num_base_bdevs": 3,
00:08:04.661 "num_base_bdevs_discovered": 1,
00:08:04.661 "num_base_bdevs_operational": 3,
00:08:04.661 "base_bdevs_list": [
00:08:04.661 {
00:08:04.661 "name": null,
00:08:04.661 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:04.661 "is_configured": false,
00:08:04.661 "data_offset": 0,
00:08:04.661 "data_size": 65536
00:08:04.661 },
00:08:04.661 {
00:08:04.661 "name": null,
00:08:04.661 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:04.661 "is_configured": false,
00:08:04.661 "data_offset": 0,
00:08:04.661 "data_size": 65536
00:08:04.661 },
00:08:04.661 {
00:08:04.661 "name": "BaseBdev3",
00:08:04.661 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:04.661 "is_configured": true,
00:08:04.661 "data_offset": 0,
00:08:04.661 "data_size": 65536
00:08:04.661 }
00:08:04.661 ]
00:08:04.661 }'
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:04.661 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.920 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:04.920 [2024-12-08 20:03:36.895564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.179 "name": "Existed_Raid",
00:08:05.179 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.179 "strip_size_kb": 64,
00:08:05.179 "state": "configuring",
00:08:05.179 "raid_level": "raid0",
00:08:05.179 "superblock": false,
00:08:05.179 "num_base_bdevs": 3,
00:08:05.179 "num_base_bdevs_discovered": 2,
00:08:05.179 "num_base_bdevs_operational": 3,
00:08:05.179 "base_bdevs_list": [
00:08:05.179 {
00:08:05.179 "name": null,
00:08:05.179 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57",
00:08:05.179 "is_configured": false,
00:08:05.179 "data_offset": 0,
00:08:05.179 "data_size": 65536
00:08:05.179 },
00:08:05.179 {
00:08:05.179 "name": "BaseBdev2",
00:08:05.179 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc",
00:08:05.179 "is_configured": true,
00:08:05.179 "data_offset": 0,
00:08:05.179 "data_size": 65536
00:08:05.179 },
00:08:05.179 {
00:08:05.179 "name": "BaseBdev3",
00:08:05.179 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4",
00:08:05.179 "is_configured": true,
00:08:05.179 "data_offset": 0,
00:08:05.179 "data_size": 65536
00:08:05.179 }
00:08:05.179 ]
00:08:05.179 }'
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.179 20:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f52c602c-d42c-4604-840d-245ec0ae8d57
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.438 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.698 [2024-12-08 20:03:37.439823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:05.698 [2024-12-08 20:03:37.439868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:08:05.698 [2024-12-08 20:03:37.439878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:05.698 [2024-12-08 20:03:37.440174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:05.698 [2024-12-08 20:03:37.440329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:08:05.698 [2024-12-08 20:03:37.440340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:08:05.698 [2024-12-08 20:03:37.440674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:05.698 NewBaseBdev
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set
+x 00:08:05.698 [ 00:08:05.698 { 00:08:05.698 "name": "NewBaseBdev", 00:08:05.698 "aliases": [ 00:08:05.698 "f52c602c-d42c-4604-840d-245ec0ae8d57" 00:08:05.698 ], 00:08:05.698 "product_name": "Malloc disk", 00:08:05.698 "block_size": 512, 00:08:05.698 "num_blocks": 65536, 00:08:05.698 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57", 00:08:05.698 "assigned_rate_limits": { 00:08:05.698 "rw_ios_per_sec": 0, 00:08:05.698 "rw_mbytes_per_sec": 0, 00:08:05.698 "r_mbytes_per_sec": 0, 00:08:05.698 "w_mbytes_per_sec": 0 00:08:05.698 }, 00:08:05.698 "claimed": true, 00:08:05.698 "claim_type": "exclusive_write", 00:08:05.698 "zoned": false, 00:08:05.698 "supported_io_types": { 00:08:05.698 "read": true, 00:08:05.698 "write": true, 00:08:05.698 "unmap": true, 00:08:05.698 "flush": true, 00:08:05.698 "reset": true, 00:08:05.698 "nvme_admin": false, 00:08:05.698 "nvme_io": false, 00:08:05.698 "nvme_io_md": false, 00:08:05.698 "write_zeroes": true, 00:08:05.698 "zcopy": true, 00:08:05.698 "get_zone_info": false, 00:08:05.698 "zone_management": false, 00:08:05.698 "zone_append": false, 00:08:05.698 "compare": false, 00:08:05.698 "compare_and_write": false, 00:08:05.698 "abort": true, 00:08:05.698 "seek_hole": false, 00:08:05.698 "seek_data": false, 00:08:05.698 "copy": true, 00:08:05.698 "nvme_iov_md": false 00:08:05.698 }, 00:08:05.698 "memory_domains": [ 00:08:05.698 { 00:08:05.698 "dma_device_id": "system", 00:08:05.698 "dma_device_type": 1 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.698 "dma_device_type": 2 00:08:05.698 } 00:08:05.698 ], 00:08:05.698 "driver_specific": {} 00:08:05.698 } 00:08:05.698 ] 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.698 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.698 "name": "Existed_Raid", 00:08:05.698 "uuid": "84713715-cfef-4d56-8ff9-f8bfb15a7beb", 00:08:05.698 "strip_size_kb": 64, 00:08:05.698 "state": "online", 00:08:05.698 "raid_level": "raid0", 00:08:05.698 "superblock": false, 00:08:05.698 "num_base_bdevs": 3, 00:08:05.698 
"num_base_bdevs_discovered": 3, 00:08:05.698 "num_base_bdevs_operational": 3, 00:08:05.698 "base_bdevs_list": [ 00:08:05.698 { 00:08:05.699 "name": "NewBaseBdev", 00:08:05.699 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57", 00:08:05.699 "is_configured": true, 00:08:05.699 "data_offset": 0, 00:08:05.699 "data_size": 65536 00:08:05.699 }, 00:08:05.699 { 00:08:05.699 "name": "BaseBdev2", 00:08:05.699 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc", 00:08:05.699 "is_configured": true, 00:08:05.699 "data_offset": 0, 00:08:05.699 "data_size": 65536 00:08:05.699 }, 00:08:05.699 { 00:08:05.699 "name": "BaseBdev3", 00:08:05.699 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4", 00:08:05.699 "is_configured": true, 00:08:05.699 "data_offset": 0, 00:08:05.699 "data_size": 65536 00:08:05.699 } 00:08:05.699 ] 00:08:05.699 }' 00:08:05.699 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.699 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.959 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.959 [2024-12-08 20:03:37.919420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.236 20:03:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.237 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.237 "name": "Existed_Raid", 00:08:06.237 "aliases": [ 00:08:06.237 "84713715-cfef-4d56-8ff9-f8bfb15a7beb" 00:08:06.237 ], 00:08:06.237 "product_name": "Raid Volume", 00:08:06.237 "block_size": 512, 00:08:06.237 "num_blocks": 196608, 00:08:06.237 "uuid": "84713715-cfef-4d56-8ff9-f8bfb15a7beb", 00:08:06.237 "assigned_rate_limits": { 00:08:06.237 "rw_ios_per_sec": 0, 00:08:06.237 "rw_mbytes_per_sec": 0, 00:08:06.237 "r_mbytes_per_sec": 0, 00:08:06.237 "w_mbytes_per_sec": 0 00:08:06.237 }, 00:08:06.237 "claimed": false, 00:08:06.237 "zoned": false, 00:08:06.237 "supported_io_types": { 00:08:06.237 "read": true, 00:08:06.237 "write": true, 00:08:06.237 "unmap": true, 00:08:06.237 "flush": true, 00:08:06.237 "reset": true, 00:08:06.237 "nvme_admin": false, 00:08:06.237 "nvme_io": false, 00:08:06.237 "nvme_io_md": false, 00:08:06.237 "write_zeroes": true, 00:08:06.237 "zcopy": false, 00:08:06.237 "get_zone_info": false, 00:08:06.237 "zone_management": false, 00:08:06.237 "zone_append": false, 00:08:06.237 "compare": false, 00:08:06.237 "compare_and_write": false, 00:08:06.237 "abort": false, 00:08:06.237 "seek_hole": false, 00:08:06.237 "seek_data": false, 00:08:06.237 "copy": false, 00:08:06.237 "nvme_iov_md": false 00:08:06.237 }, 00:08:06.237 "memory_domains": [ 00:08:06.237 { 00:08:06.237 "dma_device_id": "system", 00:08:06.237 "dma_device_type": 1 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.237 "dma_device_type": 2 00:08:06.237 }, 
00:08:06.237 { 00:08:06.237 "dma_device_id": "system", 00:08:06.237 "dma_device_type": 1 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.237 "dma_device_type": 2 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "dma_device_id": "system", 00:08:06.237 "dma_device_type": 1 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.237 "dma_device_type": 2 00:08:06.237 } 00:08:06.237 ], 00:08:06.237 "driver_specific": { 00:08:06.237 "raid": { 00:08:06.237 "uuid": "84713715-cfef-4d56-8ff9-f8bfb15a7beb", 00:08:06.237 "strip_size_kb": 64, 00:08:06.237 "state": "online", 00:08:06.237 "raid_level": "raid0", 00:08:06.237 "superblock": false, 00:08:06.237 "num_base_bdevs": 3, 00:08:06.237 "num_base_bdevs_discovered": 3, 00:08:06.237 "num_base_bdevs_operational": 3, 00:08:06.237 "base_bdevs_list": [ 00:08:06.237 { 00:08:06.237 "name": "NewBaseBdev", 00:08:06.237 "uuid": "f52c602c-d42c-4604-840d-245ec0ae8d57", 00:08:06.237 "is_configured": true, 00:08:06.237 "data_offset": 0, 00:08:06.237 "data_size": 65536 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "name": "BaseBdev2", 00:08:06.237 "uuid": "f352a1e1-a481-4338-a460-ecf488934abc", 00:08:06.237 "is_configured": true, 00:08:06.237 "data_offset": 0, 00:08:06.237 "data_size": 65536 00:08:06.237 }, 00:08:06.237 { 00:08:06.237 "name": "BaseBdev3", 00:08:06.237 "uuid": "80f25461-ea9f-46d6-9a95-44a1c91ea4f4", 00:08:06.237 "is_configured": true, 00:08:06.237 "data_offset": 0, 00:08:06.237 "data_size": 65536 00:08:06.237 } 00:08:06.237 ] 00:08:06.237 } 00:08:06.237 } 00:08:06.237 }' 00:08:06.237 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.237 20:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:06.237 BaseBdev2 00:08:06.237 BaseBdev3' 00:08:06.237 20:03:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.237 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.237 [2024-12-08 20:03:38.162622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.237 [2024-12-08 20:03:38.162694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.237 [2024-12-08 20:03:38.162795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.238 [2024-12-08 20:03:38.162906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.238 [2024-12-08 20:03:38.162991] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63660 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63660 ']' 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63660 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.238 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63660 00:08:06.542 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.542 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.542 killing process with pid 63660 00:08:06.542 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63660' 00:08:06.543 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63660 00:08:06.543 [2024-12-08 20:03:38.207555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.543 20:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63660 00:08:06.543 [2024-12-08 20:03:38.506290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.970 00:08:07.970 real 0m10.586s 00:08:07.970 user 0m16.668s 00:08:07.970 sys 0m1.936s 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:07.970 ************************************ 00:08:07.970 END TEST raid_state_function_test 00:08:07.970 ************************************ 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.970 20:03:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:07.970 20:03:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:07.970 20:03:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.970 20:03:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.970 ************************************ 00:08:07.970 START TEST raid_state_function_test_sb 00:08:07.970 ************************************ 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64283 00:08:07.970 20:03:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.970 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64283' 00:08:07.970 Process raid pid: 64283 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64283 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64283 ']' 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.971 20:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.971 [2024-12-08 20:03:39.767990] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:07.971 [2024-12-08 20:03:39.768196] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.971 [2024-12-08 20:03:39.928001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.230 [2024-12-08 20:03:40.037036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.489 [2024-12-08 20:03:40.237819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.489 [2024-12-08 20:03:40.237957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.749 [2024-12-08 20:03:40.601745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.749 [2024-12-08 20:03:40.601863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.749 [2024-12-08 20:03:40.601884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.749 [2024-12-08 20:03:40.601894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.749 [2024-12-08 20:03:40.601901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:08.749 [2024-12-08 20:03:40.601909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.749 "name": "Existed_Raid", 00:08:08.749 "uuid": "6fffdd05-1fe4-4f90-a936-ec37d243fb6c", 00:08:08.749 "strip_size_kb": 64, 00:08:08.749 "state": "configuring", 00:08:08.749 "raid_level": "raid0", 00:08:08.749 "superblock": true, 00:08:08.749 "num_base_bdevs": 3, 00:08:08.749 "num_base_bdevs_discovered": 0, 00:08:08.749 "num_base_bdevs_operational": 3, 00:08:08.749 "base_bdevs_list": [ 00:08:08.749 { 00:08:08.749 "name": "BaseBdev1", 00:08:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.749 "is_configured": false, 00:08:08.749 "data_offset": 0, 00:08:08.749 "data_size": 0 00:08:08.749 }, 00:08:08.749 { 00:08:08.749 "name": "BaseBdev2", 00:08:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.749 "is_configured": false, 00:08:08.749 "data_offset": 0, 00:08:08.749 "data_size": 0 00:08:08.749 }, 00:08:08.749 { 00:08:08.749 "name": "BaseBdev3", 00:08:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.749 "is_configured": false, 00:08:08.749 "data_offset": 0, 00:08:08.749 "data_size": 0 00:08:08.749 } 00:08:08.749 ] 00:08:08.749 }' 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.749 20:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 [2024-12-08 20:03:41.012994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.320 [2024-12-08 20:03:41.013074] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 [2024-12-08 20:03:41.024975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.320 [2024-12-08 20:03:41.025055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.320 [2024-12-08 20:03:41.025099] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.320 [2024-12-08 20:03:41.025123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.320 [2024-12-08 20:03:41.025142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.320 [2024-12-08 20:03:41.025163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 [2024-12-08 20:03:41.072709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.320 BaseBdev1 
00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.320 [ 00:08:09.320 { 00:08:09.320 "name": "BaseBdev1", 00:08:09.320 "aliases": [ 00:08:09.320 "532c3ed4-79fd-4c35-9005-6fb985486295" 00:08:09.320 ], 00:08:09.320 "product_name": "Malloc disk", 00:08:09.320 "block_size": 512, 00:08:09.320 "num_blocks": 65536, 00:08:09.320 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:09.320 "assigned_rate_limits": { 00:08:09.320 
"rw_ios_per_sec": 0, 00:08:09.320 "rw_mbytes_per_sec": 0, 00:08:09.320 "r_mbytes_per_sec": 0, 00:08:09.320 "w_mbytes_per_sec": 0 00:08:09.320 }, 00:08:09.320 "claimed": true, 00:08:09.320 "claim_type": "exclusive_write", 00:08:09.320 "zoned": false, 00:08:09.320 "supported_io_types": { 00:08:09.320 "read": true, 00:08:09.320 "write": true, 00:08:09.320 "unmap": true, 00:08:09.320 "flush": true, 00:08:09.320 "reset": true, 00:08:09.320 "nvme_admin": false, 00:08:09.320 "nvme_io": false, 00:08:09.320 "nvme_io_md": false, 00:08:09.320 "write_zeroes": true, 00:08:09.320 "zcopy": true, 00:08:09.320 "get_zone_info": false, 00:08:09.320 "zone_management": false, 00:08:09.320 "zone_append": false, 00:08:09.320 "compare": false, 00:08:09.320 "compare_and_write": false, 00:08:09.320 "abort": true, 00:08:09.320 "seek_hole": false, 00:08:09.320 "seek_data": false, 00:08:09.320 "copy": true, 00:08:09.320 "nvme_iov_md": false 00:08:09.320 }, 00:08:09.320 "memory_domains": [ 00:08:09.320 { 00:08:09.320 "dma_device_id": "system", 00:08:09.320 "dma_device_type": 1 00:08:09.320 }, 00:08:09.320 { 00:08:09.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.320 "dma_device_type": 2 00:08:09.320 } 00:08:09.320 ], 00:08:09.320 "driver_specific": {} 00:08:09.320 } 00:08:09.320 ] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.320 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.321 "name": "Existed_Raid", 00:08:09.321 "uuid": "02d42ce0-976b-416a-bcfc-352aef2dd75c", 00:08:09.321 "strip_size_kb": 64, 00:08:09.321 "state": "configuring", 00:08:09.321 "raid_level": "raid0", 00:08:09.321 "superblock": true, 00:08:09.321 "num_base_bdevs": 3, 00:08:09.321 "num_base_bdevs_discovered": 1, 00:08:09.321 "num_base_bdevs_operational": 3, 00:08:09.321 "base_bdevs_list": [ 00:08:09.321 { 00:08:09.321 "name": "BaseBdev1", 00:08:09.321 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:09.321 "is_configured": true, 00:08:09.321 "data_offset": 2048, 00:08:09.321 "data_size": 63488 
00:08:09.321 }, 00:08:09.321 { 00:08:09.321 "name": "BaseBdev2", 00:08:09.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.321 "is_configured": false, 00:08:09.321 "data_offset": 0, 00:08:09.321 "data_size": 0 00:08:09.321 }, 00:08:09.321 { 00:08:09.321 "name": "BaseBdev3", 00:08:09.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.321 "is_configured": false, 00:08:09.321 "data_offset": 0, 00:08:09.321 "data_size": 0 00:08:09.321 } 00:08:09.321 ] 00:08:09.321 }' 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.321 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 [2024-12-08 20:03:41.532048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.581 [2024-12-08 20:03:41.532145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 [2024-12-08 20:03:41.544067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.581 [2024-12-08 
20:03:41.545930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.581 [2024-12-08 20:03:41.546035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.581 [2024-12-08 20:03:41.546064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.581 [2024-12-08 20:03:41.546087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.581 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.841 20:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.841 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.841 "name": "Existed_Raid", 00:08:09.841 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:09.841 "strip_size_kb": 64, 00:08:09.841 "state": "configuring", 00:08:09.841 "raid_level": "raid0", 00:08:09.841 "superblock": true, 00:08:09.841 "num_base_bdevs": 3, 00:08:09.841 "num_base_bdevs_discovered": 1, 00:08:09.841 "num_base_bdevs_operational": 3, 00:08:09.841 "base_bdevs_list": [ 00:08:09.841 { 00:08:09.841 "name": "BaseBdev1", 00:08:09.841 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:09.841 "is_configured": true, 00:08:09.841 "data_offset": 2048, 00:08:09.841 "data_size": 63488 00:08:09.841 }, 00:08:09.841 { 00:08:09.841 "name": "BaseBdev2", 00:08:09.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.841 "is_configured": false, 00:08:09.841 "data_offset": 0, 00:08:09.841 "data_size": 0 00:08:09.841 }, 00:08:09.841 { 00:08:09.841 "name": "BaseBdev3", 00:08:09.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.841 "is_configured": false, 00:08:09.841 "data_offset": 0, 00:08:09.841 "data_size": 0 00:08:09.841 } 00:08:09.841 ] 00:08:09.841 }' 00:08:09.841 20:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.841 20:03:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 [2024-12-08 20:03:42.062367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.100 BaseBdev2 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.100 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.101 20:03:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.101 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.360 [ 00:08:10.360 { 00:08:10.360 "name": "BaseBdev2", 00:08:10.360 "aliases": [ 00:08:10.360 "c3831707-d44f-4e87-b20c-1cadfe7a55f8" 00:08:10.360 ], 00:08:10.360 "product_name": "Malloc disk", 00:08:10.360 "block_size": 512, 00:08:10.360 "num_blocks": 65536, 00:08:10.360 "uuid": "c3831707-d44f-4e87-b20c-1cadfe7a55f8", 00:08:10.360 "assigned_rate_limits": { 00:08:10.360 "rw_ios_per_sec": 0, 00:08:10.360 "rw_mbytes_per_sec": 0, 00:08:10.360 "r_mbytes_per_sec": 0, 00:08:10.360 "w_mbytes_per_sec": 0 00:08:10.360 }, 00:08:10.360 "claimed": true, 00:08:10.360 "claim_type": "exclusive_write", 00:08:10.360 "zoned": false, 00:08:10.360 "supported_io_types": { 00:08:10.360 "read": true, 00:08:10.360 "write": true, 00:08:10.360 "unmap": true, 00:08:10.360 "flush": true, 00:08:10.360 "reset": true, 00:08:10.360 "nvme_admin": false, 00:08:10.360 "nvme_io": false, 00:08:10.360 "nvme_io_md": false, 00:08:10.360 "write_zeroes": true, 00:08:10.360 "zcopy": true, 00:08:10.360 "get_zone_info": false, 00:08:10.360 "zone_management": false, 00:08:10.360 "zone_append": false, 00:08:10.360 "compare": false, 00:08:10.360 "compare_and_write": false, 00:08:10.360 "abort": true, 00:08:10.360 "seek_hole": false, 00:08:10.360 "seek_data": false, 00:08:10.360 "copy": true, 00:08:10.360 "nvme_iov_md": false 00:08:10.360 }, 00:08:10.360 "memory_domains": [ 00:08:10.360 { 00:08:10.360 "dma_device_id": "system", 00:08:10.360 "dma_device_type": 1 00:08:10.360 }, 00:08:10.360 { 00:08:10.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.360 "dma_device_type": 2 00:08:10.360 } 00:08:10.360 ], 00:08:10.360 "driver_specific": {} 00:08:10.360 } 00:08:10.360 ] 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.360 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.360 "name": "Existed_Raid", 00:08:10.360 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:10.360 "strip_size_kb": 64, 00:08:10.360 "state": "configuring", 00:08:10.360 "raid_level": "raid0", 00:08:10.360 "superblock": true, 00:08:10.360 "num_base_bdevs": 3, 00:08:10.361 "num_base_bdevs_discovered": 2, 00:08:10.361 "num_base_bdevs_operational": 3, 00:08:10.361 "base_bdevs_list": [ 00:08:10.361 { 00:08:10.361 "name": "BaseBdev1", 00:08:10.361 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:10.361 "is_configured": true, 00:08:10.361 "data_offset": 2048, 00:08:10.361 "data_size": 63488 00:08:10.361 }, 00:08:10.361 { 00:08:10.361 "name": "BaseBdev2", 00:08:10.361 "uuid": "c3831707-d44f-4e87-b20c-1cadfe7a55f8", 00:08:10.361 "is_configured": true, 00:08:10.361 "data_offset": 2048, 00:08:10.361 "data_size": 63488 00:08:10.361 }, 00:08:10.361 { 00:08:10.361 "name": "BaseBdev3", 00:08:10.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.361 "is_configured": false, 00:08:10.361 "data_offset": 0, 00:08:10.361 "data_size": 0 00:08:10.361 } 00:08:10.361 ] 00:08:10.361 }' 00:08:10.361 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.361 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.621 [2024-12-08 20:03:42.589482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:10.621 [2024-12-08 20:03:42.589755] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.621 [2024-12-08 20:03:42.589777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:10.621 [2024-12-08 20:03:42.590140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:10.621 [2024-12-08 20:03:42.590375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.621 [2024-12-08 20:03:42.590421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.621 BaseBdev3 00:08:10.621 [2024-12-08 20:03:42.590638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.621 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.881 [ 00:08:10.881 { 00:08:10.881 "name": "BaseBdev3", 00:08:10.881 "aliases": [ 00:08:10.881 "6bdfb7a3-28ba-49b7-ae10-1cb1d2d50cbe" 00:08:10.881 ], 00:08:10.881 "product_name": "Malloc disk", 00:08:10.881 "block_size": 512, 00:08:10.881 "num_blocks": 65536, 00:08:10.881 "uuid": "6bdfb7a3-28ba-49b7-ae10-1cb1d2d50cbe", 00:08:10.881 "assigned_rate_limits": { 00:08:10.881 "rw_ios_per_sec": 0, 00:08:10.881 "rw_mbytes_per_sec": 0, 00:08:10.881 "r_mbytes_per_sec": 0, 00:08:10.881 "w_mbytes_per_sec": 0 00:08:10.881 }, 00:08:10.881 "claimed": true, 00:08:10.881 "claim_type": "exclusive_write", 00:08:10.881 "zoned": false, 00:08:10.881 "supported_io_types": { 00:08:10.881 "read": true, 00:08:10.881 "write": true, 00:08:10.881 "unmap": true, 00:08:10.881 "flush": true, 00:08:10.881 "reset": true, 00:08:10.881 "nvme_admin": false, 00:08:10.881 "nvme_io": false, 00:08:10.881 "nvme_io_md": false, 00:08:10.881 "write_zeroes": true, 00:08:10.881 "zcopy": true, 00:08:10.881 "get_zone_info": false, 00:08:10.881 "zone_management": false, 00:08:10.881 "zone_append": false, 00:08:10.881 "compare": false, 00:08:10.881 "compare_and_write": false, 00:08:10.881 "abort": true, 00:08:10.881 "seek_hole": false, 00:08:10.881 "seek_data": false, 00:08:10.881 "copy": true, 00:08:10.881 "nvme_iov_md": false 00:08:10.881 }, 00:08:10.881 "memory_domains": [ 00:08:10.881 { 00:08:10.881 "dma_device_id": "system", 00:08:10.881 "dma_device_type": 1 00:08:10.881 }, 00:08:10.881 { 00:08:10.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.881 "dma_device_type": 2 00:08:10.881 } 00:08:10.881 ], 00:08:10.881 "driver_specific": 
{} 00:08:10.881 } 00:08:10.881 ] 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.881 
20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.881 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.882 "name": "Existed_Raid", 00:08:10.882 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:10.882 "strip_size_kb": 64, 00:08:10.882 "state": "online", 00:08:10.882 "raid_level": "raid0", 00:08:10.882 "superblock": true, 00:08:10.882 "num_base_bdevs": 3, 00:08:10.882 "num_base_bdevs_discovered": 3, 00:08:10.882 "num_base_bdevs_operational": 3, 00:08:10.882 "base_bdevs_list": [ 00:08:10.882 { 00:08:10.882 "name": "BaseBdev1", 00:08:10.882 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:10.882 "is_configured": true, 00:08:10.882 "data_offset": 2048, 00:08:10.882 "data_size": 63488 00:08:10.882 }, 00:08:10.882 { 00:08:10.882 "name": "BaseBdev2", 00:08:10.882 "uuid": "c3831707-d44f-4e87-b20c-1cadfe7a55f8", 00:08:10.882 "is_configured": true, 00:08:10.882 "data_offset": 2048, 00:08:10.882 "data_size": 63488 00:08:10.882 }, 00:08:10.882 { 00:08:10.882 "name": "BaseBdev3", 00:08:10.882 "uuid": "6bdfb7a3-28ba-49b7-ae10-1cb1d2d50cbe", 00:08:10.882 "is_configured": true, 00:08:10.882 "data_offset": 2048, 00:08:10.882 "data_size": 63488 00:08:10.882 } 00:08:10.882 ] 00:08:10.882 }' 00:08:10.882 20:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.882 20:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.142 [2024-12-08 20:03:43.073010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.142 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.142 "name": "Existed_Raid", 00:08:11.142 "aliases": [ 00:08:11.142 "200f217c-5f3b-432e-b2ec-38c3d8a86e22" 00:08:11.142 ], 00:08:11.142 "product_name": "Raid Volume", 00:08:11.142 "block_size": 512, 00:08:11.142 "num_blocks": 190464, 00:08:11.142 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:11.142 "assigned_rate_limits": { 00:08:11.142 "rw_ios_per_sec": 0, 00:08:11.142 "rw_mbytes_per_sec": 0, 00:08:11.142 "r_mbytes_per_sec": 0, 00:08:11.142 "w_mbytes_per_sec": 0 00:08:11.142 }, 00:08:11.142 "claimed": false, 00:08:11.142 "zoned": false, 00:08:11.142 "supported_io_types": { 00:08:11.142 "read": true, 00:08:11.142 "write": true, 00:08:11.142 "unmap": true, 00:08:11.142 "flush": true, 00:08:11.142 "reset": true, 00:08:11.142 "nvme_admin": false, 00:08:11.142 "nvme_io": false, 00:08:11.142 "nvme_io_md": false, 00:08:11.142 
"write_zeroes": true, 00:08:11.142 "zcopy": false, 00:08:11.142 "get_zone_info": false, 00:08:11.142 "zone_management": false, 00:08:11.142 "zone_append": false, 00:08:11.142 "compare": false, 00:08:11.142 "compare_and_write": false, 00:08:11.142 "abort": false, 00:08:11.142 "seek_hole": false, 00:08:11.142 "seek_data": false, 00:08:11.142 "copy": false, 00:08:11.142 "nvme_iov_md": false 00:08:11.143 }, 00:08:11.143 "memory_domains": [ 00:08:11.143 { 00:08:11.143 "dma_device_id": "system", 00:08:11.143 "dma_device_type": 1 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.143 "dma_device_type": 2 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "dma_device_id": "system", 00:08:11.143 "dma_device_type": 1 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.143 "dma_device_type": 2 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "dma_device_id": "system", 00:08:11.143 "dma_device_type": 1 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.143 "dma_device_type": 2 00:08:11.143 } 00:08:11.143 ], 00:08:11.143 "driver_specific": { 00:08:11.143 "raid": { 00:08:11.143 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:11.143 "strip_size_kb": 64, 00:08:11.143 "state": "online", 00:08:11.143 "raid_level": "raid0", 00:08:11.143 "superblock": true, 00:08:11.143 "num_base_bdevs": 3, 00:08:11.143 "num_base_bdevs_discovered": 3, 00:08:11.143 "num_base_bdevs_operational": 3, 00:08:11.143 "base_bdevs_list": [ 00:08:11.143 { 00:08:11.143 "name": "BaseBdev1", 00:08:11.143 "uuid": "532c3ed4-79fd-4c35-9005-6fb985486295", 00:08:11.143 "is_configured": true, 00:08:11.143 "data_offset": 2048, 00:08:11.143 "data_size": 63488 00:08:11.143 }, 00:08:11.143 { 00:08:11.143 "name": "BaseBdev2", 00:08:11.143 "uuid": "c3831707-d44f-4e87-b20c-1cadfe7a55f8", 00:08:11.143 "is_configured": true, 00:08:11.143 "data_offset": 2048, 00:08:11.143 "data_size": 63488 00:08:11.143 }, 
00:08:11.143 { 00:08:11.143 "name": "BaseBdev3", 00:08:11.143 "uuid": "6bdfb7a3-28ba-49b7-ae10-1cb1d2d50cbe", 00:08:11.143 "is_configured": true, 00:08:11.143 "data_offset": 2048, 00:08:11.143 "data_size": 63488 00:08:11.143 } 00:08:11.143 ] 00:08:11.143 } 00:08:11.143 } 00:08:11.143 }' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.404 BaseBdev2 00:08:11.404 BaseBdev3' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.404 
20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.404 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.404 [2024-12-08 20:03:43.324298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.404 [2024-12-08 20:03:43.324369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.404 [2024-12-08 20:03:43.324447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.664 "name": "Existed_Raid", 00:08:11.664 "uuid": "200f217c-5f3b-432e-b2ec-38c3d8a86e22", 00:08:11.664 "strip_size_kb": 64, 00:08:11.664 "state": "offline", 00:08:11.664 "raid_level": "raid0", 00:08:11.664 "superblock": true, 00:08:11.664 "num_base_bdevs": 3, 00:08:11.664 "num_base_bdevs_discovered": 2, 00:08:11.664 "num_base_bdevs_operational": 2, 00:08:11.664 "base_bdevs_list": [ 00:08:11.664 { 00:08:11.664 "name": null, 00:08:11.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.664 "is_configured": false, 00:08:11.664 "data_offset": 0, 00:08:11.664 "data_size": 63488 00:08:11.664 }, 00:08:11.664 { 00:08:11.664 "name": "BaseBdev2", 00:08:11.664 "uuid": "c3831707-d44f-4e87-b20c-1cadfe7a55f8", 00:08:11.664 "is_configured": true, 00:08:11.664 "data_offset": 2048, 00:08:11.664 "data_size": 63488 00:08:11.664 }, 00:08:11.664 { 00:08:11.664 "name": "BaseBdev3", 00:08:11.664 "uuid": "6bdfb7a3-28ba-49b7-ae10-1cb1d2d50cbe", 
00:08:11.664 "is_configured": true, 00:08:11.664 "data_offset": 2048, 00:08:11.664 "data_size": 63488 00:08:11.664 } 00:08:11.664 ] 00:08:11.664 }' 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.664 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.925 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.925 [2024-12-08 20:03:43.848602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.185 20:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.185 [2024-12-08 20:03:44.000926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:12.185 [2024-12-08 20:03:44.001037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.185 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 BaseBdev2 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:12.446 20:03:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 [ 00:08:12.446 { 00:08:12.446 "name": "BaseBdev2", 00:08:12.446 "aliases": [ 00:08:12.446 "d3d74515-0483-4a52-a17d-975536f2303c" 00:08:12.446 ], 00:08:12.446 "product_name": "Malloc disk", 00:08:12.446 "block_size": 512, 00:08:12.446 "num_blocks": 65536, 00:08:12.446 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:12.446 "assigned_rate_limits": { 00:08:12.446 "rw_ios_per_sec": 0, 00:08:12.446 "rw_mbytes_per_sec": 0, 00:08:12.446 "r_mbytes_per_sec": 0, 00:08:12.446 "w_mbytes_per_sec": 0 00:08:12.446 }, 00:08:12.446 "claimed": false, 00:08:12.446 "zoned": false, 00:08:12.446 "supported_io_types": { 00:08:12.446 "read": true, 00:08:12.446 "write": true, 00:08:12.446 "unmap": true, 00:08:12.446 "flush": true, 00:08:12.446 "reset": true, 00:08:12.446 "nvme_admin": false, 00:08:12.446 "nvme_io": false, 00:08:12.446 "nvme_io_md": false, 00:08:12.446 "write_zeroes": true, 00:08:12.446 "zcopy": true, 00:08:12.446 "get_zone_info": false, 00:08:12.446 
"zone_management": false, 00:08:12.446 "zone_append": false, 00:08:12.446 "compare": false, 00:08:12.446 "compare_and_write": false, 00:08:12.446 "abort": true, 00:08:12.446 "seek_hole": false, 00:08:12.446 "seek_data": false, 00:08:12.446 "copy": true, 00:08:12.446 "nvme_iov_md": false 00:08:12.446 }, 00:08:12.446 "memory_domains": [ 00:08:12.446 { 00:08:12.446 "dma_device_id": "system", 00:08:12.446 "dma_device_type": 1 00:08:12.446 }, 00:08:12.446 { 00:08:12.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.446 "dma_device_type": 2 00:08:12.446 } 00:08:12.446 ], 00:08:12.446 "driver_specific": {} 00:08:12.446 } 00:08:12.446 ] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 BaseBdev3 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.446 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 [ 00:08:12.446 { 00:08:12.446 "name": "BaseBdev3", 00:08:12.446 "aliases": [ 00:08:12.446 "6683d160-76b2-4c22-827e-9a8183d0c2a3" 00:08:12.446 ], 00:08:12.446 "product_name": "Malloc disk", 00:08:12.446 "block_size": 512, 00:08:12.446 "num_blocks": 65536, 00:08:12.447 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:12.447 "assigned_rate_limits": { 00:08:12.447 "rw_ios_per_sec": 0, 00:08:12.447 "rw_mbytes_per_sec": 0, 00:08:12.447 "r_mbytes_per_sec": 0, 00:08:12.447 "w_mbytes_per_sec": 0 00:08:12.447 }, 00:08:12.447 "claimed": false, 00:08:12.447 "zoned": false, 00:08:12.447 "supported_io_types": { 00:08:12.447 "read": true, 00:08:12.447 "write": true, 00:08:12.447 "unmap": true, 00:08:12.447 "flush": true, 00:08:12.447 "reset": true, 00:08:12.447 "nvme_admin": false, 00:08:12.447 "nvme_io": false, 00:08:12.447 "nvme_io_md": false, 00:08:12.447 "write_zeroes": true, 00:08:12.447 
"zcopy": true, 00:08:12.447 "get_zone_info": false, 00:08:12.447 "zone_management": false, 00:08:12.447 "zone_append": false, 00:08:12.447 "compare": false, 00:08:12.447 "compare_and_write": false, 00:08:12.447 "abort": true, 00:08:12.447 "seek_hole": false, 00:08:12.447 "seek_data": false, 00:08:12.447 "copy": true, 00:08:12.447 "nvme_iov_md": false 00:08:12.447 }, 00:08:12.447 "memory_domains": [ 00:08:12.447 { 00:08:12.447 "dma_device_id": "system", 00:08:12.447 "dma_device_type": 1 00:08:12.447 }, 00:08:12.447 { 00:08:12.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.447 "dma_device_type": 2 00:08:12.447 } 00:08:12.447 ], 00:08:12.447 "driver_specific": {} 00:08:12.447 } 00:08:12.447 ] 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.447 [2024-12-08 20:03:44.307079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.447 [2024-12-08 20:03:44.307174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.447 [2024-12-08 20:03:44.307227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.447 [2024-12-08 20:03:44.309030] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.447 20:03:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.447 "name": "Existed_Raid", 00:08:12.447 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:12.447 "strip_size_kb": 64, 00:08:12.447 "state": "configuring", 00:08:12.447 "raid_level": "raid0", 00:08:12.447 "superblock": true, 00:08:12.447 "num_base_bdevs": 3, 00:08:12.447 "num_base_bdevs_discovered": 2, 00:08:12.447 "num_base_bdevs_operational": 3, 00:08:12.447 "base_bdevs_list": [ 00:08:12.447 { 00:08:12.447 "name": "BaseBdev1", 00:08:12.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.447 "is_configured": false, 00:08:12.447 "data_offset": 0, 00:08:12.447 "data_size": 0 00:08:12.447 }, 00:08:12.447 { 00:08:12.447 "name": "BaseBdev2", 00:08:12.447 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:12.447 "is_configured": true, 00:08:12.447 "data_offset": 2048, 00:08:12.447 "data_size": 63488 00:08:12.447 }, 00:08:12.447 { 00:08:12.447 "name": "BaseBdev3", 00:08:12.447 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:12.447 "is_configured": true, 00:08:12.447 "data_offset": 2048, 00:08:12.447 "data_size": 63488 00:08:12.447 } 00:08:12.447 ] 00:08:12.447 }' 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.447 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.017 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.018 [2024-12-08 20:03:44.786330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.018 20:03:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.018 "name": "Existed_Raid", 00:08:13.018 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:13.018 "strip_size_kb": 64, 
00:08:13.018 "state": "configuring", 00:08:13.018 "raid_level": "raid0", 00:08:13.018 "superblock": true, 00:08:13.018 "num_base_bdevs": 3, 00:08:13.018 "num_base_bdevs_discovered": 1, 00:08:13.018 "num_base_bdevs_operational": 3, 00:08:13.018 "base_bdevs_list": [ 00:08:13.018 { 00:08:13.018 "name": "BaseBdev1", 00:08:13.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.018 "is_configured": false, 00:08:13.018 "data_offset": 0, 00:08:13.018 "data_size": 0 00:08:13.018 }, 00:08:13.018 { 00:08:13.018 "name": null, 00:08:13.018 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:13.018 "is_configured": false, 00:08:13.018 "data_offset": 0, 00:08:13.018 "data_size": 63488 00:08:13.018 }, 00:08:13.018 { 00:08:13.018 "name": "BaseBdev3", 00:08:13.018 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:13.018 "is_configured": true, 00:08:13.018 "data_offset": 2048, 00:08:13.018 "data_size": 63488 00:08:13.018 } 00:08:13.018 ] 00:08:13.018 }' 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.018 20:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.277 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.277 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.277 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.277 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.277 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.538 [2024-12-08 20:03:45.329286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.538 BaseBdev1 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.538 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.538 
[ 00:08:13.538 { 00:08:13.539 "name": "BaseBdev1", 00:08:13.539 "aliases": [ 00:08:13.539 "ca4062f9-5014-4820-a3ce-7a251a141c5c" 00:08:13.539 ], 00:08:13.539 "product_name": "Malloc disk", 00:08:13.539 "block_size": 512, 00:08:13.539 "num_blocks": 65536, 00:08:13.539 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:13.539 "assigned_rate_limits": { 00:08:13.539 "rw_ios_per_sec": 0, 00:08:13.539 "rw_mbytes_per_sec": 0, 00:08:13.539 "r_mbytes_per_sec": 0, 00:08:13.539 "w_mbytes_per_sec": 0 00:08:13.539 }, 00:08:13.539 "claimed": true, 00:08:13.539 "claim_type": "exclusive_write", 00:08:13.539 "zoned": false, 00:08:13.539 "supported_io_types": { 00:08:13.539 "read": true, 00:08:13.539 "write": true, 00:08:13.539 "unmap": true, 00:08:13.539 "flush": true, 00:08:13.539 "reset": true, 00:08:13.539 "nvme_admin": false, 00:08:13.539 "nvme_io": false, 00:08:13.539 "nvme_io_md": false, 00:08:13.539 "write_zeroes": true, 00:08:13.539 "zcopy": true, 00:08:13.539 "get_zone_info": false, 00:08:13.539 "zone_management": false, 00:08:13.539 "zone_append": false, 00:08:13.539 "compare": false, 00:08:13.539 "compare_and_write": false, 00:08:13.539 "abort": true, 00:08:13.539 "seek_hole": false, 00:08:13.539 "seek_data": false, 00:08:13.539 "copy": true, 00:08:13.539 "nvme_iov_md": false 00:08:13.539 }, 00:08:13.539 "memory_domains": [ 00:08:13.539 { 00:08:13.539 "dma_device_id": "system", 00:08:13.539 "dma_device_type": 1 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.539 "dma_device_type": 2 00:08:13.539 } 00:08:13.539 ], 00:08:13.539 "driver_specific": {} 00:08:13.539 } 00:08:13.539 ] 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.539 "name": "Existed_Raid", 00:08:13.539 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:13.539 "strip_size_kb": 64, 00:08:13.539 "state": "configuring", 00:08:13.539 "raid_level": "raid0", 00:08:13.539 "superblock": true, 
00:08:13.539 "num_base_bdevs": 3, 00:08:13.539 "num_base_bdevs_discovered": 2, 00:08:13.539 "num_base_bdevs_operational": 3, 00:08:13.539 "base_bdevs_list": [ 00:08:13.539 { 00:08:13.539 "name": "BaseBdev1", 00:08:13.539 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:13.539 "is_configured": true, 00:08:13.539 "data_offset": 2048, 00:08:13.539 "data_size": 63488 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "name": null, 00:08:13.539 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:13.539 "is_configured": false, 00:08:13.539 "data_offset": 0, 00:08:13.539 "data_size": 63488 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "name": "BaseBdev3", 00:08:13.539 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:13.539 "is_configured": true, 00:08:13.539 "data_offset": 2048, 00:08:13.539 "data_size": 63488 00:08:13.539 } 00:08:13.539 ] 00:08:13.539 }' 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.539 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.799 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 [2024-12-08 20:03:45.820523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.057 "name": "Existed_Raid", 00:08:14.057 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:14.057 "strip_size_kb": 64, 00:08:14.057 "state": "configuring", 00:08:14.057 "raid_level": "raid0", 00:08:14.057 "superblock": true, 00:08:14.057 "num_base_bdevs": 3, 00:08:14.057 "num_base_bdevs_discovered": 1, 00:08:14.057 "num_base_bdevs_operational": 3, 00:08:14.057 "base_bdevs_list": [ 00:08:14.057 { 00:08:14.057 "name": "BaseBdev1", 00:08:14.057 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:14.057 "is_configured": true, 00:08:14.057 "data_offset": 2048, 00:08:14.057 "data_size": 63488 00:08:14.057 }, 00:08:14.057 { 00:08:14.057 "name": null, 00:08:14.057 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:14.057 "is_configured": false, 00:08:14.057 "data_offset": 0, 00:08:14.057 "data_size": 63488 00:08:14.057 }, 00:08:14.057 { 00:08:14.057 "name": null, 00:08:14.057 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:14.057 "is_configured": false, 00:08:14.057 "data_offset": 0, 00:08:14.057 "data_size": 63488 00:08:14.057 } 00:08:14.057 ] 00:08:14.057 }' 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.057 20:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.316 [2024-12-08 20:03:46.279797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.316 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.575 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.575 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.575 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.575 "name": "Existed_Raid", 00:08:14.575 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:14.575 "strip_size_kb": 64, 00:08:14.575 "state": "configuring", 00:08:14.575 "raid_level": "raid0", 00:08:14.575 "superblock": true, 00:08:14.575 "num_base_bdevs": 3, 00:08:14.575 "num_base_bdevs_discovered": 2, 00:08:14.575 "num_base_bdevs_operational": 3, 00:08:14.575 "base_bdevs_list": [ 00:08:14.575 { 00:08:14.575 "name": "BaseBdev1", 00:08:14.575 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:14.575 "is_configured": true, 00:08:14.575 "data_offset": 2048, 00:08:14.575 "data_size": 63488 00:08:14.575 }, 00:08:14.575 { 00:08:14.575 "name": null, 00:08:14.575 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:14.575 "is_configured": false, 00:08:14.575 "data_offset": 0, 00:08:14.575 "data_size": 63488 00:08:14.575 }, 00:08:14.575 { 00:08:14.575 "name": "BaseBdev3", 00:08:14.575 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:14.575 "is_configured": true, 00:08:14.575 "data_offset": 2048, 00:08:14.575 "data_size": 63488 00:08:14.575 } 00:08:14.575 ] 00:08:14.575 }' 00:08:14.575 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.575 20:03:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.835 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.835 [2024-12-08 20:03:46.731088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.095 "name": "Existed_Raid", 00:08:15.095 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:15.095 "strip_size_kb": 64, 00:08:15.095 "state": "configuring", 00:08:15.095 "raid_level": "raid0", 00:08:15.095 "superblock": true, 00:08:15.095 "num_base_bdevs": 3, 00:08:15.095 "num_base_bdevs_discovered": 1, 00:08:15.095 "num_base_bdevs_operational": 3, 00:08:15.095 "base_bdevs_list": [ 00:08:15.095 { 00:08:15.095 "name": null, 00:08:15.095 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:15.095 "is_configured": false, 00:08:15.095 "data_offset": 0, 00:08:15.095 "data_size": 63488 00:08:15.095 }, 00:08:15.095 { 00:08:15.095 "name": null, 00:08:15.095 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:15.095 "is_configured": false, 00:08:15.095 "data_offset": 0, 00:08:15.095 
"data_size": 63488 00:08:15.095 }, 00:08:15.095 { 00:08:15.095 "name": "BaseBdev3", 00:08:15.095 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:15.095 "is_configured": true, 00:08:15.095 "data_offset": 2048, 00:08:15.095 "data_size": 63488 00:08:15.095 } 00:08:15.095 ] 00:08:15.095 }' 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.095 20:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 [2024-12-08 20:03:47.258857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.355 20:03:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.355 "name": "Existed_Raid", 00:08:15.355 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:15.355 "strip_size_kb": 64, 00:08:15.355 "state": "configuring", 00:08:15.355 "raid_level": "raid0", 00:08:15.355 "superblock": true, 00:08:15.355 "num_base_bdevs": 3, 00:08:15.355 
"num_base_bdevs_discovered": 2, 00:08:15.355 "num_base_bdevs_operational": 3, 00:08:15.355 "base_bdevs_list": [ 00:08:15.355 { 00:08:15.355 "name": null, 00:08:15.355 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:15.355 "is_configured": false, 00:08:15.355 "data_offset": 0, 00:08:15.355 "data_size": 63488 00:08:15.355 }, 00:08:15.355 { 00:08:15.355 "name": "BaseBdev2", 00:08:15.355 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:15.355 "is_configured": true, 00:08:15.355 "data_offset": 2048, 00:08:15.355 "data_size": 63488 00:08:15.355 }, 00:08:15.355 { 00:08:15.355 "name": "BaseBdev3", 00:08:15.355 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:15.355 "is_configured": true, 00:08:15.355 "data_offset": 2048, 00:08:15.355 "data_size": 63488 00:08:15.355 } 00:08:15.355 ] 00:08:15.355 }' 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.355 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.925 20:03:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ca4062f9-5014-4820-a3ce-7a251a141c5c 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 [2024-12-08 20:03:47.782087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:15.925 [2024-12-08 20:03:47.782290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:15.925 [2024-12-08 20:03:47.782307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.925 [2024-12-08 20:03:47.782539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:15.925 [2024-12-08 20:03:47.782699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:15.925 [2024-12-08 20:03:47.782709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:15.925 [2024-12-08 20:03:47.782837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.925 NewBaseBdev 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:15.925 
20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.925 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.925 [ 00:08:15.925 { 00:08:15.925 "name": "NewBaseBdev", 00:08:15.925 "aliases": [ 00:08:15.925 "ca4062f9-5014-4820-a3ce-7a251a141c5c" 00:08:15.925 ], 00:08:15.925 "product_name": "Malloc disk", 00:08:15.925 "block_size": 512, 00:08:15.925 "num_blocks": 65536, 00:08:15.925 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:15.925 "assigned_rate_limits": { 00:08:15.925 "rw_ios_per_sec": 0, 00:08:15.925 "rw_mbytes_per_sec": 0, 00:08:15.925 "r_mbytes_per_sec": 0, 00:08:15.925 "w_mbytes_per_sec": 0 00:08:15.925 }, 00:08:15.925 "claimed": true, 00:08:15.925 "claim_type": "exclusive_write", 00:08:15.925 "zoned": false, 00:08:15.925 "supported_io_types": { 00:08:15.925 "read": true, 00:08:15.925 "write": true, 00:08:15.925 
"unmap": true, 00:08:15.925 "flush": true, 00:08:15.925 "reset": true, 00:08:15.925 "nvme_admin": false, 00:08:15.926 "nvme_io": false, 00:08:15.926 "nvme_io_md": false, 00:08:15.926 "write_zeroes": true, 00:08:15.926 "zcopy": true, 00:08:15.926 "get_zone_info": false, 00:08:15.926 "zone_management": false, 00:08:15.926 "zone_append": false, 00:08:15.926 "compare": false, 00:08:15.926 "compare_and_write": false, 00:08:15.926 "abort": true, 00:08:15.926 "seek_hole": false, 00:08:15.926 "seek_data": false, 00:08:15.926 "copy": true, 00:08:15.926 "nvme_iov_md": false 00:08:15.926 }, 00:08:15.926 "memory_domains": [ 00:08:15.926 { 00:08:15.926 "dma_device_id": "system", 00:08:15.926 "dma_device_type": 1 00:08:15.926 }, 00:08:15.926 { 00:08:15.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.926 "dma_device_type": 2 00:08:15.926 } 00:08:15.926 ], 00:08:15.926 "driver_specific": {} 00:08:15.926 } 00:08:15.926 ] 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.926 "name": "Existed_Raid", 00:08:15.926 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:15.926 "strip_size_kb": 64, 00:08:15.926 "state": "online", 00:08:15.926 "raid_level": "raid0", 00:08:15.926 "superblock": true, 00:08:15.926 "num_base_bdevs": 3, 00:08:15.926 "num_base_bdevs_discovered": 3, 00:08:15.926 "num_base_bdevs_operational": 3, 00:08:15.926 "base_bdevs_list": [ 00:08:15.926 { 00:08:15.926 "name": "NewBaseBdev", 00:08:15.926 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:15.926 "is_configured": true, 00:08:15.926 "data_offset": 2048, 00:08:15.926 "data_size": 63488 00:08:15.926 }, 00:08:15.926 { 00:08:15.926 "name": "BaseBdev2", 00:08:15.926 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:15.926 "is_configured": true, 00:08:15.926 "data_offset": 2048, 00:08:15.926 "data_size": 63488 00:08:15.926 }, 00:08:15.926 { 00:08:15.926 "name": "BaseBdev3", 00:08:15.926 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:15.926 
"is_configured": true, 00:08:15.926 "data_offset": 2048, 00:08:15.926 "data_size": 63488 00:08:15.926 } 00:08:15.926 ] 00:08:15.926 }' 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.926 20:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 [2024-12-08 20:03:48.221709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.497 "name": "Existed_Raid", 00:08:16.497 "aliases": [ 00:08:16.497 "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91" 00:08:16.497 ], 00:08:16.497 "product_name": "Raid 
Volume", 00:08:16.497 "block_size": 512, 00:08:16.497 "num_blocks": 190464, 00:08:16.497 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:16.497 "assigned_rate_limits": { 00:08:16.497 "rw_ios_per_sec": 0, 00:08:16.497 "rw_mbytes_per_sec": 0, 00:08:16.497 "r_mbytes_per_sec": 0, 00:08:16.497 "w_mbytes_per_sec": 0 00:08:16.497 }, 00:08:16.497 "claimed": false, 00:08:16.497 "zoned": false, 00:08:16.497 "supported_io_types": { 00:08:16.497 "read": true, 00:08:16.497 "write": true, 00:08:16.497 "unmap": true, 00:08:16.497 "flush": true, 00:08:16.497 "reset": true, 00:08:16.497 "nvme_admin": false, 00:08:16.497 "nvme_io": false, 00:08:16.497 "nvme_io_md": false, 00:08:16.497 "write_zeroes": true, 00:08:16.497 "zcopy": false, 00:08:16.497 "get_zone_info": false, 00:08:16.497 "zone_management": false, 00:08:16.497 "zone_append": false, 00:08:16.497 "compare": false, 00:08:16.497 "compare_and_write": false, 00:08:16.497 "abort": false, 00:08:16.497 "seek_hole": false, 00:08:16.497 "seek_data": false, 00:08:16.497 "copy": false, 00:08:16.497 "nvme_iov_md": false 00:08:16.497 }, 00:08:16.497 "memory_domains": [ 00:08:16.497 { 00:08:16.497 "dma_device_id": "system", 00:08:16.497 "dma_device_type": 1 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.497 "dma_device_type": 2 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "dma_device_id": "system", 00:08:16.497 "dma_device_type": 1 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.497 "dma_device_type": 2 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "dma_device_id": "system", 00:08:16.497 "dma_device_type": 1 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.497 "dma_device_type": 2 00:08:16.497 } 00:08:16.497 ], 00:08:16.497 "driver_specific": { 00:08:16.497 "raid": { 00:08:16.497 "uuid": "2d147937-55e0-49a3-a8ad-3c5ff7c6ac91", 00:08:16.497 "strip_size_kb": 64, 00:08:16.497 "state": "online", 
00:08:16.497 "raid_level": "raid0", 00:08:16.497 "superblock": true, 00:08:16.497 "num_base_bdevs": 3, 00:08:16.497 "num_base_bdevs_discovered": 3, 00:08:16.497 "num_base_bdevs_operational": 3, 00:08:16.497 "base_bdevs_list": [ 00:08:16.497 { 00:08:16.497 "name": "NewBaseBdev", 00:08:16.497 "uuid": "ca4062f9-5014-4820-a3ce-7a251a141c5c", 00:08:16.497 "is_configured": true, 00:08:16.497 "data_offset": 2048, 00:08:16.497 "data_size": 63488 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "name": "BaseBdev2", 00:08:16.497 "uuid": "d3d74515-0483-4a52-a17d-975536f2303c", 00:08:16.497 "is_configured": true, 00:08:16.497 "data_offset": 2048, 00:08:16.497 "data_size": 63488 00:08:16.497 }, 00:08:16.497 { 00:08:16.497 "name": "BaseBdev3", 00:08:16.497 "uuid": "6683d160-76b2-4c22-827e-9a8183d0c2a3", 00:08:16.497 "is_configured": true, 00:08:16.497 "data_offset": 2048, 00:08:16.497 "data_size": 63488 00:08:16.497 } 00:08:16.497 ] 00:08:16.497 } 00:08:16.497 } 00:08:16.497 }' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.497 BaseBdev2 00:08:16.497 BaseBdev3' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.497 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.498 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.498 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.498 20:03:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.498 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.498 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.795 [2024-12-08 20:03:48.500941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.795 [2024-12-08 20:03:48.500982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.795 [2024-12-08 20:03:48.501067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.795 [2024-12-08 20:03:48.501123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.795 [2024-12-08 20:03:48.501135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64283 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64283 ']' 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64283 00:08:16.795 20:03:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64283 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64283' 00:08:16.795 killing process with pid 64283 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64283 00:08:16.795 [2024-12-08 20:03:48.546428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.795 20:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64283 00:08:17.063 [2024-12-08 20:03:48.841643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.003 ************************************ 00:08:18.003 END TEST raid_state_function_test_sb 00:08:18.003 ************************************ 00:08:18.003 20:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.003 00:08:18.003 real 0m10.246s 00:08:18.003 user 0m16.293s 00:08:18.003 sys 0m1.737s 00:08:18.003 20:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.003 20:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.003 20:03:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:18.003 20:03:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:18.003 20:03:49 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.003 20:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.263 ************************************ 00:08:18.263 START TEST raid_superblock_test 00:08:18.263 ************************************ 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:18.263 20:03:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64903 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64903 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64903 ']' 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.263 20:03:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.263 [2024-12-08 20:03:50.075063] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:18.263 [2024-12-08 20:03:50.075279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64903 ] 00:08:18.263 [2024-12-08 20:03:50.228506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.523 [2024-12-08 20:03:50.340584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.781 [2024-12-08 20:03:50.538554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.781 [2024-12-08 20:03:50.538634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:19.041 
20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.041 malloc1 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.041 [2024-12-08 20:03:50.946557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.041 [2024-12-08 20:03:50.946615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.041 [2024-12-08 20:03:50.946636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:19.041 [2024-12-08 20:03:50.946645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.041 [2024-12-08 20:03:50.948776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.041 [2024-12-08 20:03:50.948823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.041 pt1 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.041 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.042 malloc2 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.042 20:03:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.042 [2024-12-08 20:03:51.000084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.042 [2024-12-08 20:03:51.000175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.042 [2024-12-08 20:03:51.000218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:19.042 [2024-12-08 20:03:51.000268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.042 [2024-12-08 20:03:51.002278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.042 [2024-12-08 20:03:51.002346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.042 
pt2 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.042 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.301 malloc3 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.301 [2024-12-08 20:03:51.071843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:19.301 [2024-12-08 20:03:51.071953] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.301 [2024-12-08 20:03:51.071998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:19.301 [2024-12-08 20:03:51.072030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.301 [2024-12-08 20:03:51.074180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.301 [2024-12-08 20:03:51.074248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:19.301 pt3 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.301 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.301 [2024-12-08 20:03:51.083876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.301 [2024-12-08 20:03:51.085704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.301 [2024-12-08 20:03:51.085773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:19.301 [2024-12-08 20:03:51.085933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:19.301 [2024-12-08 20:03:51.085962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:19.301 [2024-12-08 20:03:51.086235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:19.301 [2024-12-08 20:03:51.086393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:19.301 [2024-12-08 20:03:51.086408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:19.301 [2024-12-08 20:03:51.086603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.302 20:03:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.302 "name": "raid_bdev1", 00:08:19.302 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:19.302 "strip_size_kb": 64, 00:08:19.302 "state": "online", 00:08:19.302 "raid_level": "raid0", 00:08:19.302 "superblock": true, 00:08:19.302 "num_base_bdevs": 3, 00:08:19.302 "num_base_bdevs_discovered": 3, 00:08:19.302 "num_base_bdevs_operational": 3, 00:08:19.302 "base_bdevs_list": [ 00:08:19.302 { 00:08:19.302 "name": "pt1", 00:08:19.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.302 "is_configured": true, 00:08:19.302 "data_offset": 2048, 00:08:19.302 "data_size": 63488 00:08:19.302 }, 00:08:19.302 { 00:08:19.302 "name": "pt2", 00:08:19.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.302 "is_configured": true, 00:08:19.302 "data_offset": 2048, 00:08:19.302 "data_size": 63488 00:08:19.302 }, 00:08:19.302 { 00:08:19.302 "name": "pt3", 00:08:19.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.302 "is_configured": true, 00:08:19.302 "data_offset": 2048, 00:08:19.302 "data_size": 63488 00:08:19.302 } 00:08:19.302 ] 00:08:19.302 }' 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.302 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.561 [2024-12-08 20:03:51.515446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.561 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.820 "name": "raid_bdev1", 00:08:19.820 "aliases": [ 00:08:19.820 "955ad663-b041-46b3-a92a-3322ee01bd4f" 00:08:19.820 ], 00:08:19.820 "product_name": "Raid Volume", 00:08:19.820 "block_size": 512, 00:08:19.820 "num_blocks": 190464, 00:08:19.820 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:19.820 "assigned_rate_limits": { 00:08:19.820 "rw_ios_per_sec": 0, 00:08:19.820 "rw_mbytes_per_sec": 0, 00:08:19.820 "r_mbytes_per_sec": 0, 00:08:19.820 "w_mbytes_per_sec": 0 00:08:19.820 }, 00:08:19.820 "claimed": false, 00:08:19.820 "zoned": false, 00:08:19.820 "supported_io_types": { 00:08:19.820 "read": true, 00:08:19.820 "write": true, 00:08:19.820 "unmap": true, 00:08:19.820 "flush": true, 00:08:19.820 "reset": true, 00:08:19.820 "nvme_admin": false, 00:08:19.820 "nvme_io": false, 00:08:19.820 "nvme_io_md": false, 00:08:19.820 "write_zeroes": true, 00:08:19.820 "zcopy": false, 00:08:19.820 "get_zone_info": false, 00:08:19.820 "zone_management": false, 00:08:19.820 "zone_append": false, 00:08:19.820 "compare": 
false, 00:08:19.820 "compare_and_write": false, 00:08:19.820 "abort": false, 00:08:19.820 "seek_hole": false, 00:08:19.820 "seek_data": false, 00:08:19.820 "copy": false, 00:08:19.820 "nvme_iov_md": false 00:08:19.820 }, 00:08:19.820 "memory_domains": [ 00:08:19.820 { 00:08:19.820 "dma_device_id": "system", 00:08:19.820 "dma_device_type": 1 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.820 "dma_device_type": 2 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "dma_device_id": "system", 00:08:19.820 "dma_device_type": 1 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.820 "dma_device_type": 2 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "dma_device_id": "system", 00:08:19.820 "dma_device_type": 1 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.820 "dma_device_type": 2 00:08:19.820 } 00:08:19.820 ], 00:08:19.820 "driver_specific": { 00:08:19.820 "raid": { 00:08:19.820 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:19.820 "strip_size_kb": 64, 00:08:19.820 "state": "online", 00:08:19.820 "raid_level": "raid0", 00:08:19.820 "superblock": true, 00:08:19.820 "num_base_bdevs": 3, 00:08:19.820 "num_base_bdevs_discovered": 3, 00:08:19.820 "num_base_bdevs_operational": 3, 00:08:19.820 "base_bdevs_list": [ 00:08:19.820 { 00:08:19.820 "name": "pt1", 00:08:19.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.820 "is_configured": true, 00:08:19.820 "data_offset": 2048, 00:08:19.820 "data_size": 63488 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "name": "pt2", 00:08:19.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.820 "is_configured": true, 00:08:19.820 "data_offset": 2048, 00:08:19.820 "data_size": 63488 00:08:19.820 }, 00:08:19.820 { 00:08:19.820 "name": "pt3", 00:08:19.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.820 "is_configured": true, 00:08:19.820 "data_offset": 2048, 00:08:19.820 "data_size": 
63488 00:08:19.820 } 00:08:19.820 ] 00:08:19.820 } 00:08:19.820 } 00:08:19.820 }' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.820 pt2 00:08:19.820 pt3' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.820 
20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.820 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 [2024-12-08 20:03:51.802880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=955ad663-b041-46b3-a92a-3322ee01bd4f 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 955ad663-b041-46b3-a92a-3322ee01bd4f ']' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 [2024-12-08 20:03:51.850521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.110 [2024-12-08 20:03:51.850588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.110 [2024-12-08 20:03:51.850678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.110 [2024-12-08 20:03:51.850754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.110 [2024-12-08 20:03:51.850788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:20.110 20:03:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.110 20:03:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.110 [2024-12-08 20:03:52.006357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:20.110 [2024-12-08 20:03:52.008307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:20.110 [2024-12-08 20:03:52.008366] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:20.110 [2024-12-08 20:03:52.008432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:20.110 [2024-12-08 20:03:52.008485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:20.110 [2024-12-08 20:03:52.008503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:20.110 [2024-12-08 20:03:52.008519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.110 [2024-12-08 20:03:52.008531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:20.110 request: 00:08:20.110 { 00:08:20.110 "name": "raid_bdev1", 00:08:20.110 "raid_level": "raid0", 00:08:20.110 "base_bdevs": [ 00:08:20.110 "malloc1", 00:08:20.110 "malloc2", 00:08:20.110 "malloc3" 00:08:20.110 ], 00:08:20.110 "strip_size_kb": 64, 00:08:20.110 "superblock": false, 00:08:20.110 "method": "bdev_raid_create", 00:08:20.110 "req_id": 1 00:08:20.110 } 00:08:20.110 Got JSON-RPC error response 00:08:20.110 response: 00:08:20.110 { 00:08:20.110 "code": -17, 00:08:20.110 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:20.110 } 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.110 20:03:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.111 [2024-12-08 20:03:52.074144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.111 [2024-12-08 20:03:52.074242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.111 [2024-12-08 20:03:52.074297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:20.111 [2024-12-08 20:03:52.074326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.111 [2024-12-08 20:03:52.076531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.111 [2024-12-08 20:03:52.076613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.111 [2024-12-08 20:03:52.076741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:20.111 [2024-12-08 20:03:52.076849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:20.111 pt1 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.111 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.371 "name": "raid_bdev1", 00:08:20.371 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:20.371 
"strip_size_kb": 64, 00:08:20.371 "state": "configuring", 00:08:20.371 "raid_level": "raid0", 00:08:20.371 "superblock": true, 00:08:20.371 "num_base_bdevs": 3, 00:08:20.371 "num_base_bdevs_discovered": 1, 00:08:20.371 "num_base_bdevs_operational": 3, 00:08:20.371 "base_bdevs_list": [ 00:08:20.371 { 00:08:20.371 "name": "pt1", 00:08:20.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.371 "is_configured": true, 00:08:20.371 "data_offset": 2048, 00:08:20.371 "data_size": 63488 00:08:20.371 }, 00:08:20.371 { 00:08:20.371 "name": null, 00:08:20.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.371 "is_configured": false, 00:08:20.371 "data_offset": 2048, 00:08:20.371 "data_size": 63488 00:08:20.371 }, 00:08:20.371 { 00:08:20.371 "name": null, 00:08:20.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.371 "is_configured": false, 00:08:20.371 "data_offset": 2048, 00:08:20.371 "data_size": 63488 00:08:20.371 } 00:08:20.371 ] 00:08:20.371 }' 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.371 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 [2024-12-08 20:03:52.457514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.631 [2024-12-08 20:03:52.457587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.631 [2024-12-08 20:03:52.457617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:20.631 [2024-12-08 20:03:52.457627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.631 [2024-12-08 20:03:52.458090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.631 [2024-12-08 20:03:52.458112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.631 [2024-12-08 20:03:52.458203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.631 [2024-12-08 20:03:52.458270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.631 pt2 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 [2024-12-08 20:03:52.469515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.631 20:03:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.631 "name": "raid_bdev1", 00:08:20.631 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:20.631 "strip_size_kb": 64, 00:08:20.631 "state": "configuring", 00:08:20.631 "raid_level": "raid0", 00:08:20.631 "superblock": true, 00:08:20.631 "num_base_bdevs": 3, 00:08:20.631 "num_base_bdevs_discovered": 1, 00:08:20.631 "num_base_bdevs_operational": 3, 00:08:20.631 "base_bdevs_list": [ 00:08:20.631 { 00:08:20.631 "name": "pt1", 00:08:20.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.631 "is_configured": true, 00:08:20.631 "data_offset": 2048, 00:08:20.631 "data_size": 63488 00:08:20.631 }, 00:08:20.631 { 00:08:20.631 "name": null, 00:08:20.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.631 "is_configured": false, 00:08:20.631 "data_offset": 0, 00:08:20.631 "data_size": 63488 00:08:20.631 }, 00:08:20.631 { 00:08:20.631 "name": null, 00:08:20.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.631 
"is_configured": false, 00:08:20.631 "data_offset": 2048, 00:08:20.631 "data_size": 63488 00:08:20.631 } 00:08:20.631 ] 00:08:20.631 }' 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.631 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.202 [2024-12-08 20:03:52.900775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.202 [2024-12-08 20:03:52.900887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.202 [2024-12-08 20:03:52.900922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:21.202 [2024-12-08 20:03:52.900961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.202 [2024-12-08 20:03:52.901547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.202 [2024-12-08 20:03:52.901617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.202 [2024-12-08 20:03:52.901758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.202 [2024-12-08 20:03:52.901813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.202 pt2 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:21.202 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.203 [2024-12-08 20:03:52.912730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:21.203 [2024-12-08 20:03:52.912813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.203 [2024-12-08 20:03:52.912830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:21.203 [2024-12-08 20:03:52.912839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.203 [2024-12-08 20:03:52.913269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.203 [2024-12-08 20:03:52.913292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:21.203 [2024-12-08 20:03:52.913348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:21.203 [2024-12-08 20:03:52.913367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:21.203 [2024-12-08 20:03:52.913492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.203 [2024-12-08 20:03:52.913502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.203 [2024-12-08 20:03:52.913732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:21.203 [2024-12-08 20:03:52.913866] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.203 [2024-12-08 20:03:52.913874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:21.203 [2024-12-08 20:03:52.914051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.203 pt3 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.203 "name": "raid_bdev1", 00:08:21.203 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:21.203 "strip_size_kb": 64, 00:08:21.203 "state": "online", 00:08:21.203 "raid_level": "raid0", 00:08:21.203 "superblock": true, 00:08:21.203 "num_base_bdevs": 3, 00:08:21.203 "num_base_bdevs_discovered": 3, 00:08:21.203 "num_base_bdevs_operational": 3, 00:08:21.203 "base_bdevs_list": [ 00:08:21.203 { 00:08:21.203 "name": "pt1", 00:08:21.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.203 "is_configured": true, 00:08:21.203 "data_offset": 2048, 00:08:21.203 "data_size": 63488 00:08:21.203 }, 00:08:21.203 { 00:08:21.203 "name": "pt2", 00:08:21.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.203 "is_configured": true, 00:08:21.203 "data_offset": 2048, 00:08:21.203 "data_size": 63488 00:08:21.203 }, 00:08:21.203 { 00:08:21.203 "name": "pt3", 00:08:21.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.203 "is_configured": true, 00:08:21.203 "data_offset": 2048, 00:08:21.203 "data_size": 63488 00:08:21.203 } 00:08:21.203 ] 00:08:21.203 }' 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.203 20:03:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.462 20:03:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.462 [2024-12-08 20:03:53.356383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.462 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.462 "name": "raid_bdev1", 00:08:21.462 "aliases": [ 00:08:21.462 "955ad663-b041-46b3-a92a-3322ee01bd4f" 00:08:21.462 ], 00:08:21.462 "product_name": "Raid Volume", 00:08:21.462 "block_size": 512, 00:08:21.462 "num_blocks": 190464, 00:08:21.462 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:21.462 "assigned_rate_limits": { 00:08:21.462 "rw_ios_per_sec": 0, 00:08:21.462 "rw_mbytes_per_sec": 0, 00:08:21.462 "r_mbytes_per_sec": 0, 00:08:21.462 "w_mbytes_per_sec": 0 00:08:21.462 }, 00:08:21.462 "claimed": false, 00:08:21.462 "zoned": false, 00:08:21.462 "supported_io_types": { 00:08:21.462 "read": true, 00:08:21.462 "write": true, 00:08:21.462 "unmap": true, 00:08:21.462 "flush": true, 00:08:21.462 "reset": true, 00:08:21.462 "nvme_admin": false, 00:08:21.462 "nvme_io": false, 00:08:21.462 "nvme_io_md": false, 00:08:21.462 
"write_zeroes": true, 00:08:21.462 "zcopy": false, 00:08:21.462 "get_zone_info": false, 00:08:21.462 "zone_management": false, 00:08:21.462 "zone_append": false, 00:08:21.462 "compare": false, 00:08:21.462 "compare_and_write": false, 00:08:21.462 "abort": false, 00:08:21.462 "seek_hole": false, 00:08:21.462 "seek_data": false, 00:08:21.462 "copy": false, 00:08:21.462 "nvme_iov_md": false 00:08:21.462 }, 00:08:21.462 "memory_domains": [ 00:08:21.462 { 00:08:21.462 "dma_device_id": "system", 00:08:21.462 "dma_device_type": 1 00:08:21.462 }, 00:08:21.462 { 00:08:21.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.462 "dma_device_type": 2 00:08:21.462 }, 00:08:21.462 { 00:08:21.462 "dma_device_id": "system", 00:08:21.462 "dma_device_type": 1 00:08:21.462 }, 00:08:21.462 { 00:08:21.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.462 "dma_device_type": 2 00:08:21.462 }, 00:08:21.462 { 00:08:21.462 "dma_device_id": "system", 00:08:21.463 "dma_device_type": 1 00:08:21.463 }, 00:08:21.463 { 00:08:21.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.463 "dma_device_type": 2 00:08:21.463 } 00:08:21.463 ], 00:08:21.463 "driver_specific": { 00:08:21.463 "raid": { 00:08:21.463 "uuid": "955ad663-b041-46b3-a92a-3322ee01bd4f", 00:08:21.463 "strip_size_kb": 64, 00:08:21.463 "state": "online", 00:08:21.463 "raid_level": "raid0", 00:08:21.463 "superblock": true, 00:08:21.463 "num_base_bdevs": 3, 00:08:21.463 "num_base_bdevs_discovered": 3, 00:08:21.463 "num_base_bdevs_operational": 3, 00:08:21.463 "base_bdevs_list": [ 00:08:21.463 { 00:08:21.463 "name": "pt1", 00:08:21.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.463 "is_configured": true, 00:08:21.463 "data_offset": 2048, 00:08:21.463 "data_size": 63488 00:08:21.463 }, 00:08:21.463 { 00:08:21.463 "name": "pt2", 00:08:21.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.463 "is_configured": true, 00:08:21.463 "data_offset": 2048, 00:08:21.463 "data_size": 63488 00:08:21.463 }, 00:08:21.463 
{ 00:08:21.463 "name": "pt3", 00:08:21.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.463 "is_configured": true, 00:08:21.463 "data_offset": 2048, 00:08:21.463 "data_size": 63488 00:08:21.463 } 00:08:21.463 ] 00:08:21.463 } 00:08:21.463 } 00:08:21.463 }' 00:08:21.463 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.463 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.463 pt2 00:08:21.463 pt3' 00:08:21.463 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.721 20:03:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.721 
[2024-12-08 20:03:53.623850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 955ad663-b041-46b3-a92a-3322ee01bd4f '!=' 955ad663-b041-46b3-a92a-3322ee01bd4f ']' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64903 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64903 ']' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64903 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.721 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64903 00:08:21.981 killing process with pid 64903 00:08:21.981 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.981 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.981 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64903' 00:08:21.981 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64903 00:08:21.981 [2024-12-08 20:03:53.701120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.981 [2024-12-08 20:03:53.701215] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.981 [2024-12-08 20:03:53.701279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.981 [2024-12-08 20:03:53.701291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.981 20:03:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64903 00:08:22.241 [2024-12-08 20:03:53.993443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.182 20:03:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:23.182 00:08:23.182 real 0m5.076s 00:08:23.182 user 0m7.266s 00:08:23.182 sys 0m0.864s 00:08:23.182 20:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.182 ************************************ 00:08:23.182 END TEST raid_superblock_test 00:08:23.182 ************************************ 00:08:23.182 20:03:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.182 20:03:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:23.182 20:03:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.182 20:03:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.182 20:03:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.182 ************************************ 00:08:23.182 START TEST raid_read_error_test 00:08:23.182 ************************************ 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:23.182 20:03:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QtTdW0szbH 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65156 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65156 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65156 ']' 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.182 20:03:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.441 [2024-12-08 20:03:55.240235] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:23.441 [2024-12-08 20:03:55.240447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65156 ] 00:08:23.441 [2024-12-08 20:03:55.410441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.701 [2024-12-08 20:03:55.517463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.960 [2024-12-08 20:03:55.712683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.960 [2024-12-08 20:03:55.712791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.220 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.220 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:24.220 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 BaseBdev1_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 true 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 [2024-12-08 20:03:56.114135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:24.221 [2024-12-08 20:03:56.114241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.221 [2024-12-08 20:03:56.114265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:24.221 [2024-12-08 20:03:56.114276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.221 [2024-12-08 20:03:56.116367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.221 [2024-12-08 20:03:56.116419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:24.221 BaseBdev1 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 BaseBdev2_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 true 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 [2024-12-08 20:03:56.179593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:24.221 [2024-12-08 20:03:56.179643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.221 [2024-12-08 20:03:56.179676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:24.221 [2024-12-08 20:03:56.179685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.221 [2024-12-08 20:03:56.181670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.221 [2024-12-08 20:03:56.181787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:24.221 BaseBdev2 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.221 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.481 BaseBdev3_malloc 00:08:24.481 20:03:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.481 true 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.481 [2024-12-08 20:03:56.260498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:24.481 [2024-12-08 20:03:56.260549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.481 [2024-12-08 20:03:56.260565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:24.481 [2024-12-08 20:03:56.260575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.481 [2024-12-08 20:03:56.262812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.481 [2024-12-08 20:03:56.262852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:24.481 BaseBdev3 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.481 [2024-12-08 20:03:56.272552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.481 [2024-12-08 20:03:56.274345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.481 [2024-12-08 20:03:56.274413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.481 [2024-12-08 20:03:56.274594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:24.481 [2024-12-08 20:03:56.274608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.481 [2024-12-08 20:03:56.274839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:24.481 [2024-12-08 20:03:56.275023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:24.481 [2024-12-08 20:03:56.275038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:24.481 [2024-12-08 20:03:56.275188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.481 20:03:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.481 "name": "raid_bdev1", 00:08:24.481 "uuid": "bfd3815e-c0c2-4300-be39-00f48bf3f9f4", 00:08:24.481 "strip_size_kb": 64, 00:08:24.481 "state": "online", 00:08:24.481 "raid_level": "raid0", 00:08:24.481 "superblock": true, 00:08:24.481 "num_base_bdevs": 3, 00:08:24.481 "num_base_bdevs_discovered": 3, 00:08:24.481 "num_base_bdevs_operational": 3, 00:08:24.481 "base_bdevs_list": [ 00:08:24.481 { 00:08:24.481 "name": "BaseBdev1", 00:08:24.481 "uuid": "4cf22bfb-e30e-56c4-982d-c8bd31e17886", 00:08:24.481 "is_configured": true, 00:08:24.481 "data_offset": 2048, 00:08:24.481 "data_size": 63488 00:08:24.481 }, 00:08:24.481 { 00:08:24.481 "name": "BaseBdev2", 00:08:24.481 "uuid": "0067cc74-671e-53a1-88cb-8367489bd87a", 00:08:24.481 "is_configured": true, 00:08:24.481 "data_offset": 2048, 00:08:24.481 "data_size": 63488 
00:08:24.481 }, 00:08:24.481 { 00:08:24.481 "name": "BaseBdev3", 00:08:24.481 "uuid": "8e56a5f5-1d42-5c71-afa0-acd41f83ae49", 00:08:24.481 "is_configured": true, 00:08:24.481 "data_offset": 2048, 00:08:24.481 "data_size": 63488 00:08:24.481 } 00:08:24.481 ] 00:08:24.481 }' 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.481 20:03:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.757 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.757 20:03:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.016 [2024-12-08 20:03:56.792977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.957 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.958 "name": "raid_bdev1", 00:08:25.958 "uuid": "bfd3815e-c0c2-4300-be39-00f48bf3f9f4", 00:08:25.958 "strip_size_kb": 64, 00:08:25.958 "state": "online", 00:08:25.958 "raid_level": "raid0", 00:08:25.958 "superblock": true, 00:08:25.958 "num_base_bdevs": 3, 00:08:25.958 "num_base_bdevs_discovered": 3, 00:08:25.958 "num_base_bdevs_operational": 3, 00:08:25.958 "base_bdevs_list": [ 00:08:25.958 { 00:08:25.958 "name": "BaseBdev1", 00:08:25.958 "uuid": "4cf22bfb-e30e-56c4-982d-c8bd31e17886", 00:08:25.958 "is_configured": true, 00:08:25.958 "data_offset": 2048, 00:08:25.958 "data_size": 63488 
00:08:25.958 }, 00:08:25.958 { 00:08:25.958 "name": "BaseBdev2", 00:08:25.958 "uuid": "0067cc74-671e-53a1-88cb-8367489bd87a", 00:08:25.958 "is_configured": true, 00:08:25.958 "data_offset": 2048, 00:08:25.958 "data_size": 63488 00:08:25.958 }, 00:08:25.958 { 00:08:25.958 "name": "BaseBdev3", 00:08:25.958 "uuid": "8e56a5f5-1d42-5c71-afa0-acd41f83ae49", 00:08:25.958 "is_configured": true, 00:08:25.958 "data_offset": 2048, 00:08:25.958 "data_size": 63488 00:08:25.958 } 00:08:25.958 ] 00:08:25.958 }' 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.958 20:03:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.218 [2024-12-08 20:03:58.185049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.218 [2024-12-08 20:03:58.185143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.218 [2024-12-08 20:03:58.187886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.218 [2024-12-08 20:03:58.187928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.218 [2024-12-08 20:03:58.187974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.218 [2024-12-08 20:03:58.187983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.218 { 00:08:26.218 "results": [ 00:08:26.218 { 00:08:26.218 "job": "raid_bdev1", 
00:08:26.218 "core_mask": "0x1", 00:08:26.218 "workload": "randrw", 00:08:26.218 "percentage": 50, 00:08:26.218 "status": "finished", 00:08:26.218 "queue_depth": 1, 00:08:26.218 "io_size": 131072, 00:08:26.218 "runtime": 1.393137, 00:08:26.218 "iops": 15628.039453406234, 00:08:26.218 "mibps": 1953.5049316757793, 00:08:26.218 "io_failed": 1, 00:08:26.218 "io_timeout": 0, 00:08:26.218 "avg_latency_us": 88.8622311556499, 00:08:26.218 "min_latency_us": 20.90480349344978, 00:08:26.218 "max_latency_us": 1359.3711790393013 00:08:26.218 } 00:08:26.218 ], 00:08:26.218 "core_count": 1 00:08:26.218 } 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65156 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65156 ']' 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65156 00:08:26.218 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65156 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65156' 00:08:26.479 killing process with pid 65156 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65156 00:08:26.479 [2024-12-08 20:03:58.232556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.479 20:03:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65156 00:08:26.739 [2024-12-08 
20:03:58.457511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QtTdW0szbH 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:27.679 00:08:27.679 real 0m4.458s 00:08:27.679 user 0m5.266s 00:08:27.679 sys 0m0.572s 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.679 20:03:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.679 ************************************ 00:08:27.679 END TEST raid_read_error_test 00:08:27.679 ************************************ 00:08:27.679 20:03:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:27.679 20:03:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.679 20:03:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.679 20:03:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.938 ************************************ 00:08:27.939 START TEST raid_write_error_test 00:08:27.939 ************************************ 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:27.939 20:03:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.939 20:03:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e7HcaxVUMh 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65296 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65296 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65296 ']' 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.939 20:03:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.939 [2024-12-08 20:03:59.771462] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:27.939 [2024-12-08 20:03:59.771579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65296 ] 00:08:28.199 [2024-12-08 20:03:59.941997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.199 [2024-12-08 20:04:00.050785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.458 [2024-12-08 20:04:00.246531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.458 [2024-12-08 20:04:00.246561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.735 BaseBdev1_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.735 true 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.735 [2024-12-08 20:04:00.647327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.735 [2024-12-08 20:04:00.647433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.735 [2024-12-08 20:04:00.647461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:28.735 [2024-12-08 20:04:00.647474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.735 [2024-12-08 20:04:00.649659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.735 [2024-12-08 20:04:00.649703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.735 BaseBdev1 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.735 BaseBdev2_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.735 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 true 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 [2024-12-08 20:04:00.713724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.022 [2024-12-08 20:04:00.713789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.022 [2024-12-08 20:04:00.713809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.022 [2024-12-08 20:04:00.713819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.022 [2024-12-08 20:04:00.716200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.022 [2024-12-08 20:04:00.716241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.022 BaseBdev2 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.022 20:04:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 BaseBdev3_malloc 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 true 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 [2024-12-08 20:04:00.794187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:29.022 [2024-12-08 20:04:00.794242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.022 [2024-12-08 20:04:00.794259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.022 [2024-12-08 20:04:00.794270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.022 [2024-12-08 20:04:00.796516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.022 [2024-12-08 20:04:00.796609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:29.022 BaseBdev3 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 [2024-12-08 20:04:00.806243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.022 [2024-12-08 20:04:00.808061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.022 [2024-12-08 20:04:00.808200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.022 [2024-12-08 20:04:00.808417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.022 [2024-12-08 20:04:00.808432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.022 [2024-12-08 20:04:00.808660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:29.022 [2024-12-08 20:04:00.808810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.022 [2024-12-08 20:04:00.808823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:29.022 [2024-12-08 20:04:00.808975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.022 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.022 "name": "raid_bdev1", 00:08:29.022 "uuid": "df5bea99-9d22-469d-9372-d160557096ee", 00:08:29.022 "strip_size_kb": 64, 00:08:29.022 "state": "online", 00:08:29.022 "raid_level": "raid0", 00:08:29.022 "superblock": true, 00:08:29.022 "num_base_bdevs": 3, 00:08:29.022 "num_base_bdevs_discovered": 3, 00:08:29.022 "num_base_bdevs_operational": 3, 00:08:29.022 "base_bdevs_list": [ 00:08:29.022 { 00:08:29.022 "name": "BaseBdev1", 
00:08:29.022 "uuid": "b270e444-f46c-5805-8b75-73dfd364b040", 00:08:29.022 "is_configured": true, 00:08:29.023 "data_offset": 2048, 00:08:29.023 "data_size": 63488 00:08:29.023 }, 00:08:29.023 { 00:08:29.023 "name": "BaseBdev2", 00:08:29.023 "uuid": "a8f259d0-21e9-5443-9ae9-ac3180edbbc6", 00:08:29.023 "is_configured": true, 00:08:29.023 "data_offset": 2048, 00:08:29.023 "data_size": 63488 00:08:29.023 }, 00:08:29.023 { 00:08:29.023 "name": "BaseBdev3", 00:08:29.023 "uuid": "dbce25be-388a-5042-aeff-6a52cd09d32b", 00:08:29.023 "is_configured": true, 00:08:29.023 "data_offset": 2048, 00:08:29.023 "data_size": 63488 00:08:29.023 } 00:08:29.023 ] 00:08:29.023 }' 00:08:29.023 20:04:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.023 20:04:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.282 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:29.282 20:04:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:29.542 [2024-12-08 20:04:01.346685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.481 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.482 "name": "raid_bdev1", 00:08:30.482 "uuid": "df5bea99-9d22-469d-9372-d160557096ee", 00:08:30.482 "strip_size_kb": 64, 00:08:30.482 "state": "online", 00:08:30.482 
"raid_level": "raid0", 00:08:30.482 "superblock": true, 00:08:30.482 "num_base_bdevs": 3, 00:08:30.482 "num_base_bdevs_discovered": 3, 00:08:30.482 "num_base_bdevs_operational": 3, 00:08:30.482 "base_bdevs_list": [ 00:08:30.482 { 00:08:30.482 "name": "BaseBdev1", 00:08:30.482 "uuid": "b270e444-f46c-5805-8b75-73dfd364b040", 00:08:30.482 "is_configured": true, 00:08:30.482 "data_offset": 2048, 00:08:30.482 "data_size": 63488 00:08:30.482 }, 00:08:30.482 { 00:08:30.482 "name": "BaseBdev2", 00:08:30.482 "uuid": "a8f259d0-21e9-5443-9ae9-ac3180edbbc6", 00:08:30.482 "is_configured": true, 00:08:30.482 "data_offset": 2048, 00:08:30.482 "data_size": 63488 00:08:30.482 }, 00:08:30.482 { 00:08:30.482 "name": "BaseBdev3", 00:08:30.482 "uuid": "dbce25be-388a-5042-aeff-6a52cd09d32b", 00:08:30.482 "is_configured": true, 00:08:30.482 "data_offset": 2048, 00:08:30.482 "data_size": 63488 00:08:30.482 } 00:08:30.482 ] 00:08:30.482 }' 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.482 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.742 [2024-12-08 20:04:02.672629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.742 [2024-12-08 20:04:02.672715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.742 [2024-12-08 20:04:02.675537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.742 [2024-12-08 20:04:02.675578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.742 [2024-12-08 20:04:02.675614] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.742 [2024-12-08 20:04:02.675623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65296 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65296 ']' 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65296 00:08:30.742 { 00:08:30.742 "results": [ 00:08:30.742 { 00:08:30.742 "job": "raid_bdev1", 00:08:30.742 "core_mask": "0x1", 00:08:30.742 "workload": "randrw", 00:08:30.742 "percentage": 50, 00:08:30.742 "status": "finished", 00:08:30.742 "queue_depth": 1, 00:08:30.742 "io_size": 131072, 00:08:30.742 "runtime": 1.326853, 00:08:30.742 "iops": 15432.00339449811, 00:08:30.742 "mibps": 1929.0004243122637, 00:08:30.742 "io_failed": 1, 00:08:30.742 "io_timeout": 0, 00:08:30.742 "avg_latency_us": 90.00550426903504, 00:08:30.742 "min_latency_us": 19.116157205240174, 00:08:30.742 "max_latency_us": 1495.3082969432314 00:08:30.742 } 00:08:30.742 ], 00:08:30.742 "core_count": 1 00:08:30.742 } 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65296 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65296' 00:08:30.742 killing process with pid 65296 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65296 00:08:30.742 20:04:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65296 00:08:30.742 [2024-12-08 20:04:02.703581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.002 [2024-12-08 20:04:02.928459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.384 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:32.384 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e7HcaxVUMh 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:32.385 00:08:32.385 real 0m4.432s 00:08:32.385 user 0m5.193s 00:08:32.385 sys 0m0.558s 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.385 20:04:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 ************************************ 00:08:32.385 END TEST raid_write_error_test 00:08:32.385 ************************************ 00:08:32.385 20:04:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:32.385 20:04:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:32.385 20:04:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:32.385 20:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.385 20:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 ************************************ 00:08:32.385 START TEST raid_state_function_test 00:08:32.385 ************************************ 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:32.385 20:04:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65440 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65440' 00:08:32.385 Process raid pid: 65440 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65440 00:08:32.385 20:04:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65440 ']' 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.385 20:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 [2024-12-08 20:04:04.263947] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:32.385 [2024-12-08 20:04:04.264146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.645 [2024-12-08 20:04:04.441601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.645 [2024-12-08 20:04:04.554627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.914 [2024-12-08 20:04:04.757527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.915 [2024-12-08 20:04:04.757649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.174 [2024-12-08 20:04:05.093802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.174 [2024-12-08 20:04:05.093858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.174 [2024-12-08 20:04:05.093869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.174 [2024-12-08 20:04:05.093879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.174 [2024-12-08 20:04:05.093885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.174 [2024-12-08 20:04:05.093894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.174 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.433 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.433 "name": "Existed_Raid", 00:08:33.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.433 "strip_size_kb": 64, 00:08:33.433 "state": "configuring", 00:08:33.433 "raid_level": "concat", 00:08:33.433 "superblock": false, 00:08:33.433 "num_base_bdevs": 3, 00:08:33.433 "num_base_bdevs_discovered": 0, 00:08:33.433 "num_base_bdevs_operational": 3, 00:08:33.433 "base_bdevs_list": [ 00:08:33.433 { 00:08:33.433 "name": "BaseBdev1", 00:08:33.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.433 "is_configured": false, 00:08:33.433 "data_offset": 0, 00:08:33.433 "data_size": 0 00:08:33.433 }, 00:08:33.433 { 00:08:33.433 "name": "BaseBdev2", 00:08:33.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.433 "is_configured": false, 00:08:33.433 "data_offset": 0, 00:08:33.433 "data_size": 0 00:08:33.433 }, 00:08:33.433 { 00:08:33.433 "name": "BaseBdev3", 00:08:33.433 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:33.433 "is_configured": false, 00:08:33.433 "data_offset": 0, 00:08:33.433 "data_size": 0 00:08:33.433 } 00:08:33.433 ] 00:08:33.433 }' 00:08:33.433 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.433 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.692 [2024-12-08 20:04:05.545001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.692 [2024-12-08 20:04:05.545090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.692 [2024-12-08 20:04:05.552984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.692 [2024-12-08 20:04:05.553078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.692 [2024-12-08 20:04:05.553108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.692 [2024-12-08 20:04:05.553131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:33.692 [2024-12-08 20:04:05.553150] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.692 [2024-12-08 20:04:05.553171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.692 [2024-12-08 20:04:05.594653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.692 BaseBdev1 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.692 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.692 [ 00:08:33.692 { 00:08:33.692 "name": "BaseBdev1", 00:08:33.692 "aliases": [ 00:08:33.692 "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01" 00:08:33.692 ], 00:08:33.692 "product_name": "Malloc disk", 00:08:33.692 "block_size": 512, 00:08:33.692 "num_blocks": 65536, 00:08:33.692 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:33.692 "assigned_rate_limits": { 00:08:33.692 "rw_ios_per_sec": 0, 00:08:33.692 "rw_mbytes_per_sec": 0, 00:08:33.692 "r_mbytes_per_sec": 0, 00:08:33.692 "w_mbytes_per_sec": 0 00:08:33.692 }, 00:08:33.692 "claimed": true, 00:08:33.692 "claim_type": "exclusive_write", 00:08:33.692 "zoned": false, 00:08:33.692 "supported_io_types": { 00:08:33.692 "read": true, 00:08:33.692 "write": true, 00:08:33.692 "unmap": true, 00:08:33.693 "flush": true, 00:08:33.693 "reset": true, 00:08:33.693 "nvme_admin": false, 00:08:33.693 "nvme_io": false, 00:08:33.693 "nvme_io_md": false, 00:08:33.693 "write_zeroes": true, 00:08:33.693 "zcopy": true, 00:08:33.693 "get_zone_info": false, 00:08:33.693 "zone_management": false, 00:08:33.693 "zone_append": false, 00:08:33.693 "compare": false, 00:08:33.693 "compare_and_write": false, 00:08:33.693 "abort": true, 00:08:33.693 "seek_hole": false, 00:08:33.693 "seek_data": false, 00:08:33.693 "copy": true, 00:08:33.693 "nvme_iov_md": false 00:08:33.693 }, 00:08:33.693 "memory_domains": [ 00:08:33.693 { 00:08:33.693 "dma_device_id": "system", 00:08:33.693 "dma_device_type": 1 00:08:33.693 }, 00:08:33.693 { 00:08:33.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:33.693 "dma_device_type": 2 00:08:33.693 } 00:08:33.693 ], 00:08:33.693 "driver_specific": {} 00:08:33.693 } 00:08:33.693 ] 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.693 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.693 20:04:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.952 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.952 "name": "Existed_Raid", 00:08:33.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.952 "strip_size_kb": 64, 00:08:33.952 "state": "configuring", 00:08:33.952 "raid_level": "concat", 00:08:33.952 "superblock": false, 00:08:33.952 "num_base_bdevs": 3, 00:08:33.952 "num_base_bdevs_discovered": 1, 00:08:33.952 "num_base_bdevs_operational": 3, 00:08:33.952 "base_bdevs_list": [ 00:08:33.952 { 00:08:33.952 "name": "BaseBdev1", 00:08:33.952 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:33.952 "is_configured": true, 00:08:33.952 "data_offset": 0, 00:08:33.952 "data_size": 65536 00:08:33.952 }, 00:08:33.952 { 00:08:33.952 "name": "BaseBdev2", 00:08:33.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.952 "is_configured": false, 00:08:33.952 "data_offset": 0, 00:08:33.952 "data_size": 0 00:08:33.952 }, 00:08:33.952 { 00:08:33.952 "name": "BaseBdev3", 00:08:33.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.952 "is_configured": false, 00:08:33.952 "data_offset": 0, 00:08:33.952 "data_size": 0 00:08:33.952 } 00:08:33.952 ] 00:08:33.952 }' 00:08:33.952 20:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.952 20:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 [2024-12-08 20:04:06.010004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.211 [2024-12-08 20:04:06.010060] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 [2024-12-08 20:04:06.022023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.211 [2024-12-08 20:04:06.023946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.211 [2024-12-08 20:04:06.024036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.211 [2024-12-08 20:04:06.024067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.211 [2024-12-08 20:04:06.024088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.211 20:04:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.211 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.211 "name": "Existed_Raid", 00:08:34.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.212 "strip_size_kb": 64, 00:08:34.212 "state": "configuring", 00:08:34.212 "raid_level": "concat", 00:08:34.212 "superblock": false, 00:08:34.212 "num_base_bdevs": 3, 00:08:34.212 "num_base_bdevs_discovered": 1, 00:08:34.212 "num_base_bdevs_operational": 3, 00:08:34.212 "base_bdevs_list": [ 00:08:34.212 { 00:08:34.212 "name": "BaseBdev1", 00:08:34.212 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:34.212 "is_configured": true, 00:08:34.212 "data_offset": 
0, 00:08:34.212 "data_size": 65536 00:08:34.212 }, 00:08:34.212 { 00:08:34.212 "name": "BaseBdev2", 00:08:34.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.212 "is_configured": false, 00:08:34.212 "data_offset": 0, 00:08:34.212 "data_size": 0 00:08:34.212 }, 00:08:34.212 { 00:08:34.212 "name": "BaseBdev3", 00:08:34.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.212 "is_configured": false, 00:08:34.212 "data_offset": 0, 00:08:34.212 "data_size": 0 00:08:34.212 } 00:08:34.212 ] 00:08:34.212 }' 00:08:34.212 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.212 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.470 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.470 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.470 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.731 [2024-12-08 20:04:06.471517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.731 BaseBdev2 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.731 [ 00:08:34.731 { 00:08:34.731 "name": "BaseBdev2", 00:08:34.731 "aliases": [ 00:08:34.731 "be635557-927c-4ed8-9090-11712f731b4c" 00:08:34.731 ], 00:08:34.731 "product_name": "Malloc disk", 00:08:34.731 "block_size": 512, 00:08:34.731 "num_blocks": 65536, 00:08:34.731 "uuid": "be635557-927c-4ed8-9090-11712f731b4c", 00:08:34.731 "assigned_rate_limits": { 00:08:34.731 "rw_ios_per_sec": 0, 00:08:34.731 "rw_mbytes_per_sec": 0, 00:08:34.731 "r_mbytes_per_sec": 0, 00:08:34.731 "w_mbytes_per_sec": 0 00:08:34.731 }, 00:08:34.731 "claimed": true, 00:08:34.731 "claim_type": "exclusive_write", 00:08:34.731 "zoned": false, 00:08:34.731 "supported_io_types": { 00:08:34.731 "read": true, 00:08:34.731 "write": true, 00:08:34.731 "unmap": true, 00:08:34.731 "flush": true, 00:08:34.731 "reset": true, 00:08:34.731 "nvme_admin": false, 00:08:34.731 "nvme_io": false, 00:08:34.731 "nvme_io_md": false, 00:08:34.731 "write_zeroes": true, 00:08:34.731 "zcopy": true, 00:08:34.731 "get_zone_info": false, 00:08:34.731 "zone_management": false, 00:08:34.731 "zone_append": false, 00:08:34.731 "compare": false, 00:08:34.731 "compare_and_write": false, 00:08:34.731 "abort": true, 00:08:34.731 "seek_hole": 
false, 00:08:34.731 "seek_data": false, 00:08:34.731 "copy": true, 00:08:34.731 "nvme_iov_md": false 00:08:34.731 }, 00:08:34.731 "memory_domains": [ 00:08:34.731 { 00:08:34.731 "dma_device_id": "system", 00:08:34.731 "dma_device_type": 1 00:08:34.731 }, 00:08:34.731 { 00:08:34.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.731 "dma_device_type": 2 00:08:34.731 } 00:08:34.731 ], 00:08:34.731 "driver_specific": {} 00:08:34.731 } 00:08:34.731 ] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.731 "name": "Existed_Raid", 00:08:34.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.731 "strip_size_kb": 64, 00:08:34.731 "state": "configuring", 00:08:34.731 "raid_level": "concat", 00:08:34.731 "superblock": false, 00:08:34.731 "num_base_bdevs": 3, 00:08:34.731 "num_base_bdevs_discovered": 2, 00:08:34.731 "num_base_bdevs_operational": 3, 00:08:34.731 "base_bdevs_list": [ 00:08:34.731 { 00:08:34.731 "name": "BaseBdev1", 00:08:34.731 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:34.731 "is_configured": true, 00:08:34.731 "data_offset": 0, 00:08:34.731 "data_size": 65536 00:08:34.731 }, 00:08:34.731 { 00:08:34.731 "name": "BaseBdev2", 00:08:34.731 "uuid": "be635557-927c-4ed8-9090-11712f731b4c", 00:08:34.731 "is_configured": true, 00:08:34.731 "data_offset": 0, 00:08:34.731 "data_size": 65536 00:08:34.731 }, 00:08:34.731 { 00:08:34.731 "name": "BaseBdev3", 00:08:34.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.731 "is_configured": false, 00:08:34.731 "data_offset": 0, 00:08:34.731 "data_size": 0 00:08:34.731 } 00:08:34.731 ] 00:08:34.731 }' 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.731 20:04:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.992 20:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.992 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.992 20:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 [2024-12-08 20:04:07.001037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.252 [2024-12-08 20:04:07.001179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.252 [2024-12-08 20:04:07.001210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.252 [2024-12-08 20:04:07.001532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:35.252 [2024-12-08 20:04:07.001770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.252 [2024-12-08 20:04:07.001814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.252 [2024-12-08 20:04:07.002151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.252 BaseBdev3 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.253 20:04:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.253 [ 00:08:35.253 { 00:08:35.253 "name": "BaseBdev3", 00:08:35.253 "aliases": [ 00:08:35.253 "9c1986f4-2ca8-4cd1-9400-c767420c15c1" 00:08:35.253 ], 00:08:35.253 "product_name": "Malloc disk", 00:08:35.253 "block_size": 512, 00:08:35.253 "num_blocks": 65536, 00:08:35.253 "uuid": "9c1986f4-2ca8-4cd1-9400-c767420c15c1", 00:08:35.253 "assigned_rate_limits": { 00:08:35.253 "rw_ios_per_sec": 0, 00:08:35.253 "rw_mbytes_per_sec": 0, 00:08:35.253 "r_mbytes_per_sec": 0, 00:08:35.253 "w_mbytes_per_sec": 0 00:08:35.253 }, 00:08:35.253 "claimed": true, 00:08:35.253 "claim_type": "exclusive_write", 00:08:35.253 "zoned": false, 00:08:35.253 "supported_io_types": { 00:08:35.253 "read": true, 00:08:35.253 "write": true, 00:08:35.253 "unmap": true, 00:08:35.253 "flush": true, 00:08:35.253 "reset": true, 00:08:35.253 "nvme_admin": false, 00:08:35.253 "nvme_io": false, 00:08:35.253 "nvme_io_md": false, 00:08:35.253 "write_zeroes": true, 00:08:35.253 "zcopy": true, 00:08:35.253 "get_zone_info": false, 00:08:35.253 "zone_management": false, 00:08:35.253 "zone_append": false, 00:08:35.253 "compare": false, 
00:08:35.253 "compare_and_write": false, 00:08:35.253 "abort": true, 00:08:35.253 "seek_hole": false, 00:08:35.253 "seek_data": false, 00:08:35.253 "copy": true, 00:08:35.253 "nvme_iov_md": false 00:08:35.253 }, 00:08:35.253 "memory_domains": [ 00:08:35.253 { 00:08:35.253 "dma_device_id": "system", 00:08:35.253 "dma_device_type": 1 00:08:35.253 }, 00:08:35.253 { 00:08:35.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.253 "dma_device_type": 2 00:08:35.253 } 00:08:35.253 ], 00:08:35.253 "driver_specific": {} 00:08:35.253 } 00:08:35.253 ] 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.253 "name": "Existed_Raid", 00:08:35.253 "uuid": "adc2aa53-9ec7-43fc-a566-9deed06c5bb2", 00:08:35.253 "strip_size_kb": 64, 00:08:35.253 "state": "online", 00:08:35.253 "raid_level": "concat", 00:08:35.253 "superblock": false, 00:08:35.253 "num_base_bdevs": 3, 00:08:35.253 "num_base_bdevs_discovered": 3, 00:08:35.253 "num_base_bdevs_operational": 3, 00:08:35.253 "base_bdevs_list": [ 00:08:35.253 { 00:08:35.253 "name": "BaseBdev1", 00:08:35.253 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:35.253 "is_configured": true, 00:08:35.253 "data_offset": 0, 00:08:35.253 "data_size": 65536 00:08:35.253 }, 00:08:35.253 { 00:08:35.253 "name": "BaseBdev2", 00:08:35.253 "uuid": "be635557-927c-4ed8-9090-11712f731b4c", 00:08:35.253 "is_configured": true, 00:08:35.253 "data_offset": 0, 00:08:35.253 "data_size": 65536 00:08:35.253 }, 00:08:35.253 { 00:08:35.253 "name": "BaseBdev3", 00:08:35.253 "uuid": "9c1986f4-2ca8-4cd1-9400-c767420c15c1", 00:08:35.253 "is_configured": true, 00:08:35.253 "data_offset": 0, 00:08:35.253 "data_size": 65536 00:08:35.253 } 00:08:35.253 ] 00:08:35.253 }' 00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:35.253 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.514 [2024-12-08 20:04:07.448632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.514 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.514 "name": "Existed_Raid", 00:08:35.514 "aliases": [ 00:08:35.514 "adc2aa53-9ec7-43fc-a566-9deed06c5bb2" 00:08:35.514 ], 00:08:35.514 "product_name": "Raid Volume", 00:08:35.514 "block_size": 512, 00:08:35.514 "num_blocks": 196608, 00:08:35.514 "uuid": "adc2aa53-9ec7-43fc-a566-9deed06c5bb2", 00:08:35.514 "assigned_rate_limits": { 00:08:35.514 "rw_ios_per_sec": 0, 00:08:35.514 "rw_mbytes_per_sec": 0, 00:08:35.514 "r_mbytes_per_sec": 
0, 00:08:35.514 "w_mbytes_per_sec": 0 00:08:35.514 }, 00:08:35.514 "claimed": false, 00:08:35.514 "zoned": false, 00:08:35.515 "supported_io_types": { 00:08:35.515 "read": true, 00:08:35.515 "write": true, 00:08:35.515 "unmap": true, 00:08:35.515 "flush": true, 00:08:35.515 "reset": true, 00:08:35.515 "nvme_admin": false, 00:08:35.515 "nvme_io": false, 00:08:35.515 "nvme_io_md": false, 00:08:35.515 "write_zeroes": true, 00:08:35.515 "zcopy": false, 00:08:35.515 "get_zone_info": false, 00:08:35.515 "zone_management": false, 00:08:35.515 "zone_append": false, 00:08:35.515 "compare": false, 00:08:35.515 "compare_and_write": false, 00:08:35.515 "abort": false, 00:08:35.515 "seek_hole": false, 00:08:35.515 "seek_data": false, 00:08:35.515 "copy": false, 00:08:35.515 "nvme_iov_md": false 00:08:35.515 }, 00:08:35.515 "memory_domains": [ 00:08:35.515 { 00:08:35.515 "dma_device_id": "system", 00:08:35.515 "dma_device_type": 1 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.515 "dma_device_type": 2 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "dma_device_id": "system", 00:08:35.515 "dma_device_type": 1 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.515 "dma_device_type": 2 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "dma_device_id": "system", 00:08:35.515 "dma_device_type": 1 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.515 "dma_device_type": 2 00:08:35.515 } 00:08:35.515 ], 00:08:35.515 "driver_specific": { 00:08:35.515 "raid": { 00:08:35.515 "uuid": "adc2aa53-9ec7-43fc-a566-9deed06c5bb2", 00:08:35.515 "strip_size_kb": 64, 00:08:35.515 "state": "online", 00:08:35.515 "raid_level": "concat", 00:08:35.515 "superblock": false, 00:08:35.515 "num_base_bdevs": 3, 00:08:35.515 "num_base_bdevs_discovered": 3, 00:08:35.515 "num_base_bdevs_operational": 3, 00:08:35.515 "base_bdevs_list": [ 00:08:35.515 { 00:08:35.515 "name": "BaseBdev1", 
00:08:35.515 "uuid": "2fc9bb2a-b239-4e5e-b5c3-235e70b49f01", 00:08:35.515 "is_configured": true, 00:08:35.515 "data_offset": 0, 00:08:35.515 "data_size": 65536 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "name": "BaseBdev2", 00:08:35.515 "uuid": "be635557-927c-4ed8-9090-11712f731b4c", 00:08:35.515 "is_configured": true, 00:08:35.515 "data_offset": 0, 00:08:35.515 "data_size": 65536 00:08:35.515 }, 00:08:35.515 { 00:08:35.515 "name": "BaseBdev3", 00:08:35.515 "uuid": "9c1986f4-2ca8-4cd1-9400-c767420c15c1", 00:08:35.515 "is_configured": true, 00:08:35.515 "data_offset": 0, 00:08:35.515 "data_size": 65536 00:08:35.515 } 00:08:35.515 ] 00:08:35.515 } 00:08:35.515 } 00:08:35.515 }' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.774 BaseBdev2 00:08:35.774 BaseBdev3' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]]
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:35.774 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:35.775 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.775 [2024-12-08 20:04:07.699918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:35.775 [2024-12-08 20:04:07.700000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:35.775 [2024-12-08 20:04:07.700104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:36.034 "name": "Existed_Raid",
00:08:36.034 "uuid": "adc2aa53-9ec7-43fc-a566-9deed06c5bb2",
00:08:36.034 "strip_size_kb": 64,
00:08:36.034 "state": "offline",
00:08:36.034 "raid_level": "concat",
00:08:36.034 "superblock": false,
00:08:36.034 "num_base_bdevs": 3,
00:08:36.034 "num_base_bdevs_discovered": 2,
00:08:36.034 "num_base_bdevs_operational": 2,
00:08:36.034 "base_bdevs_list": [
00:08:36.034 {
00:08:36.034 "name": null,
00:08:36.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:36.034 "is_configured": false,
00:08:36.034 "data_offset": 0,
00:08:36.034 "data_size": 65536
00:08:36.034 },
00:08:36.034 {
00:08:36.034 "name": "BaseBdev2",
00:08:36.034 "uuid": "be635557-927c-4ed8-9090-11712f731b4c",
00:08:36.034 "is_configured": true,
00:08:36.034 "data_offset": 0,
00:08:36.034 "data_size": 65536
00:08:36.034 },
00:08:36.034 {
00:08:36.034 "name": "BaseBdev3",
00:08:36.034 "uuid": "9c1986f4-2ca8-4cd1-9400-c767420c15c1",
00:08:36.034 "is_configured": true,
00:08:36.034 "data_offset": 0,
00:08:36.034 "data_size": 65536
00:08:36.034 }
00:08:36.034 ]
00:08:36.034 }'
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:36.034 20:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.294 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.294 [2024-12-08 20:04:08.247506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:36.553 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.553 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:36.553 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:36.553 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.554 [2024-12-08 20:04:08.400215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:36.554 [2024-12-08 20:04:08.400332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.554 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.813 BaseBdev2
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.813 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.813 [
00:08:36.813 {
00:08:36.813 "name": "BaseBdev2",
00:08:36.813 "aliases": [
00:08:36.813 "2433e2b6-9182-4df4-ae80-faa1f6d11d41"
00:08:36.813 ],
00:08:36.813 "product_name": "Malloc disk",
00:08:36.813 "block_size": 512,
00:08:36.813 "num_blocks": 65536,
00:08:36.813 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41",
00:08:36.813 "assigned_rate_limits": {
00:08:36.813 "rw_ios_per_sec": 0,
00:08:36.813 "rw_mbytes_per_sec": 0,
00:08:36.813 "r_mbytes_per_sec": 0,
00:08:36.813 "w_mbytes_per_sec": 0
00:08:36.814 },
00:08:36.814 "claimed": false,
00:08:36.814 "zoned": false,
00:08:36.814 "supported_io_types": {
00:08:36.814 "read": true,
00:08:36.814 "write": true,
00:08:36.814 "unmap": true,
00:08:36.814 "flush": true,
00:08:36.814 "reset": true,
00:08:36.814 "nvme_admin": false,
00:08:36.814 "nvme_io": false,
00:08:36.814 "nvme_io_md": false,
00:08:36.814 "write_zeroes": true,
00:08:36.814 "zcopy": true,
00:08:36.814 "get_zone_info": false,
00:08:36.814 "zone_management": false,
00:08:36.814 "zone_append": false,
00:08:36.814 "compare": false,
00:08:36.814 "compare_and_write": false,
00:08:36.814 "abort": true,
00:08:36.814 "seek_hole": false,
00:08:36.814 "seek_data": false,
00:08:36.814 "copy": true,
00:08:36.814 "nvme_iov_md": false
00:08:36.814 },
00:08:36.814 "memory_domains": [
00:08:36.814 {
00:08:36.814 "dma_device_id": "system",
00:08:36.814 "dma_device_type": 1
00:08:36.814 },
00:08:36.814 {
00:08:36.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:36.814 "dma_device_type": 2
00:08:36.814 }
00:08:36.814 ],
00:08:36.814 "driver_specific": {}
00:08:36.814 }
00:08:36.814 ]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.814 BaseBdev3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.814 [
00:08:36.814 {
00:08:36.814 "name": "BaseBdev3",
00:08:36.814 "aliases": [
00:08:36.814 "dafb639b-fc21-486e-882b-095987f3394a"
00:08:36.814 ],
00:08:36.814 "product_name": "Malloc disk",
00:08:36.814 "block_size": 512,
00:08:36.814 "num_blocks": 65536,
00:08:36.814 "uuid": "dafb639b-fc21-486e-882b-095987f3394a",
00:08:36.814 "assigned_rate_limits": {
00:08:36.814 "rw_ios_per_sec": 0,
00:08:36.814 "rw_mbytes_per_sec": 0,
00:08:36.814 "r_mbytes_per_sec": 0,
00:08:36.814 "w_mbytes_per_sec": 0
00:08:36.814 },
00:08:36.814 "claimed": false,
00:08:36.814 "zoned": false,
00:08:36.814 "supported_io_types": {
00:08:36.814 "read": true,
00:08:36.814 "write": true,
00:08:36.814 "unmap": true,
00:08:36.814 "flush": true,
00:08:36.814 "reset": true,
00:08:36.814 "nvme_admin": false,
00:08:36.814 "nvme_io": false,
00:08:36.814 "nvme_io_md": false,
00:08:36.814 "write_zeroes": true,
00:08:36.814 "zcopy": true,
00:08:36.814 "get_zone_info": false,
00:08:36.814 "zone_management": false,
00:08:36.814 "zone_append": false,
00:08:36.814 "compare": false,
00:08:36.814 "compare_and_write": false,
00:08:36.814 "abort": true,
00:08:36.814 "seek_hole": false,
00:08:36.814 "seek_data": false,
00:08:36.814 "copy": true,
00:08:36.814 "nvme_iov_md": false
00:08:36.814 },
00:08:36.814 "memory_domains": [
00:08:36.814 {
00:08:36.814 "dma_device_id": "system",
00:08:36.814 "dma_device_type": 1
00:08:36.814 },
00:08:36.814 {
00:08:36.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:36.814 "dma_device_type": 2
00:08:36.814 }
00:08:36.814 ],
00:08:36.814 "driver_specific": {}
00:08:36.814 }
00:08:36.814 ]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.814 [2024-12-08 20:04:08.707786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:36.814 [2024-12-08 20:04:08.707833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:36.814 [2024-12-08 20:04:08.707854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:36.814 [2024-12-08 20:04:08.709579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:36.814 "name": "Existed_Raid",
00:08:36.814 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:36.814 "strip_size_kb": 64,
00:08:36.814 "state": "configuring",
00:08:36.814 "raid_level": "concat",
00:08:36.814 "superblock": false,
00:08:36.814 "num_base_bdevs": 3,
00:08:36.814 "num_base_bdevs_discovered": 2,
00:08:36.814 "num_base_bdevs_operational": 3,
00:08:36.814 "base_bdevs_list": [
00:08:36.814 {
00:08:36.814 "name": "BaseBdev1",
00:08:36.814 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:36.814 "is_configured": false,
00:08:36.814 "data_offset": 0,
00:08:36.814 "data_size": 0
00:08:36.814 },
00:08:36.814 {
00:08:36.814 "name": "BaseBdev2",
00:08:36.814 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41",
00:08:36.814 "is_configured": true,
00:08:36.814 "data_offset": 0,
00:08:36.814 "data_size": 65536
00:08:36.814 },
00:08:36.814 {
00:08:36.814 "name": "BaseBdev3",
00:08:36.814 "uuid": "dafb639b-fc21-486e-882b-095987f3394a",
00:08:36.814 "is_configured": true,
00:08:36.814 "data_offset": 0,
00:08:36.814 "data_size": 65536
00:08:36.814 }
00:08:36.814 ]
00:08:36.814 }'
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:36.814 20:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.382 [2024-12-08 20:04:09.143110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:37.382 "name": "Existed_Raid",
00:08:37.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.382 "strip_size_kb": 64,
00:08:37.382 "state": "configuring",
00:08:37.382 "raid_level": "concat",
00:08:37.382 "superblock": false,
00:08:37.382 "num_base_bdevs": 3,
00:08:37.382 "num_base_bdevs_discovered": 1,
00:08:37.382 "num_base_bdevs_operational": 3,
00:08:37.382 "base_bdevs_list": [
00:08:37.382 {
00:08:37.382 "name": "BaseBdev1",
00:08:37.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.382 "is_configured": false,
00:08:37.382 "data_offset": 0,
00:08:37.382 "data_size": 0
00:08:37.382 },
00:08:37.382 {
00:08:37.382 "name": null,
00:08:37.382 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41",
00:08:37.382 "is_configured": false,
00:08:37.382 "data_offset": 0,
00:08:37.382 "data_size": 65536
00:08:37.382 },
00:08:37.382 {
00:08:37.382 "name": "BaseBdev3",
00:08:37.382 "uuid": "dafb639b-fc21-486e-882b-095987f3394a",
00:08:37.382 "is_configured": true,
00:08:37.382 "data_offset": 0,
00:08:37.382 "data_size": 65536
00:08:37.382 }
00:08:37.382 ]
00:08:37.382 }'
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:37.382 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.641 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.641 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:37.641 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.641 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.641 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.902 [2024-12-08 20:04:09.682759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:37.902 BaseBdev1
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.902 [
00:08:37.902 {
00:08:37.902 "name": "BaseBdev1",
00:08:37.902 "aliases": [
00:08:37.902 "d8afb207-9688-421e-8d0a-940da21e96a2"
00:08:37.902 ],
00:08:37.902 "product_name": "Malloc disk",
00:08:37.902 "block_size": 512,
00:08:37.902 "num_blocks": 65536,
00:08:37.902 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2",
00:08:37.902 "assigned_rate_limits": {
00:08:37.902 "rw_ios_per_sec": 0,
00:08:37.902 "rw_mbytes_per_sec": 0,
00:08:37.902 "r_mbytes_per_sec": 0,
00:08:37.902 "w_mbytes_per_sec": 0
00:08:37.902 },
00:08:37.902 "claimed": true,
00:08:37.902 "claim_type": "exclusive_write",
00:08:37.902 "zoned": false,
00:08:37.902 "supported_io_types": {
00:08:37.902 "read": true,
00:08:37.902 "write": true,
00:08:37.902 "unmap": true,
00:08:37.902 "flush": true,
00:08:37.902 "reset": true,
00:08:37.902 "nvme_admin": false,
00:08:37.902 "nvme_io": false,
00:08:37.902 "nvme_io_md": false,
00:08:37.902 "write_zeroes": true,
00:08:37.902 "zcopy": true,
00:08:37.902 "get_zone_info": false,
00:08:37.902 "zone_management": false,
00:08:37.902 "zone_append": false,
00:08:37.902 "compare": false,
00:08:37.902 "compare_and_write": false,
00:08:37.902 "abort": true,
00:08:37.902 "seek_hole": false,
00:08:37.902 "seek_data": false,
00:08:37.902 "copy": true,
00:08:37.902 "nvme_iov_md": false
00:08:37.902 },
00:08:37.902 "memory_domains": [
00:08:37.902 {
00:08:37.902 "dma_device_id": "system",
00:08:37.902 "dma_device_type": 1
00:08:37.902 },
00:08:37.902 {
00:08:37.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.902 "dma_device_type": 2
00:08:37.902 }
00:08:37.902 ],
00:08:37.902 "driver_specific": {}
00:08:37.902 }
00:08:37.902 ]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:37.902 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:37.903 "name": "Existed_Raid",
00:08:37.903 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.903 "strip_size_kb": 64,
00:08:37.903 "state": "configuring",
00:08:37.903 "raid_level": "concat",
00:08:37.903 "superblock": false,
00:08:37.903 "num_base_bdevs": 3,
00:08:37.903 "num_base_bdevs_discovered": 2,
00:08:37.903 "num_base_bdevs_operational": 3,
00:08:37.903 "base_bdevs_list": [
00:08:37.903 {
00:08:37.903 "name": "BaseBdev1",
00:08:37.903 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2",
00:08:37.903 "is_configured": true,
00:08:37.903 "data_offset": 0,
00:08:37.903 "data_size": 65536
00:08:37.903 },
00:08:37.903 {
00:08:37.903 "name": null,
00:08:37.903 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41",
00:08:37.903 "is_configured": false,
00:08:37.903 "data_offset": 0,
00:08:37.903 "data_size": 65536
00:08:37.903 },
00:08:37.903 {
00:08:37.903 "name": "BaseBdev3",
00:08:37.903 "uuid": "dafb639b-fc21-486e-882b-095987f3394a",
00:08:37.903 "is_configured": true,
00:08:37.903 "data_offset": 0,
00:08:37.903 "data_size": 65536
00:08:37.903 }
00:08:37.903 ]
00:08:37.903 }'
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:37.903 20:04:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.164 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.165 [2024-12-08 20:04:10.118117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.165 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.443 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.443 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.443 "name": "Existed_Raid",
00:08:38.443 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.443 "strip_size_kb": 64,
00:08:38.443 "state": "configuring",
00:08:38.443 "raid_level": "concat",
00:08:38.443 "superblock": false,
00:08:38.443 "num_base_bdevs": 3,
00:08:38.443 "num_base_bdevs_discovered": 1,
00:08:38.443 "num_base_bdevs_operational": 3,
00:08:38.443 "base_bdevs_list": [
00:08:38.443 {
00:08:38.443 "name": "BaseBdev1",
00:08:38.443 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2",
00:08:38.443 "is_configured": true,
00:08:38.443 "data_offset": 0,
00:08:38.443 "data_size": 65536
00:08:38.443 },
00:08:38.443 {
00:08:38.443 "name": null,
00:08:38.443 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41",
00:08:38.443 "is_configured": false,
00:08:38.443 "data_offset": 0,
00:08:38.443 "data_size": 65536
00:08:38.443 },
00:08:38.443 {
00:08:38.443 "name": null,
00:08:38.443 "uuid": "dafb639b-fc21-486e-882b-095987f3394a",
00:08:38.443 "is_configured": false,
00:08:38.443 "data_offset": 0,
00:08:38.443 "data_size": 65536
00:08:38.443 }
00:08:38.443 ]
00:08:38.443 }'
00:08:38.443 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.443 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 [2024-12-08 20:04:10.585333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.724 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.724 "name": "Existed_Raid", 00:08:38.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.724 "strip_size_kb": 64, 00:08:38.724 "state": "configuring", 00:08:38.724 "raid_level": "concat", 00:08:38.724 "superblock": false, 00:08:38.724 "num_base_bdevs": 3, 00:08:38.724 "num_base_bdevs_discovered": 2, 00:08:38.724 "num_base_bdevs_operational": 3, 00:08:38.724 "base_bdevs_list": [ 00:08:38.724 { 00:08:38.724 "name": "BaseBdev1", 00:08:38.724 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:38.724 "is_configured": true, 00:08:38.724 "data_offset": 0, 00:08:38.725 "data_size": 65536 00:08:38.725 }, 00:08:38.725 { 00:08:38.725 "name": null, 00:08:38.725 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41", 00:08:38.725 "is_configured": false, 00:08:38.725 "data_offset": 0, 00:08:38.725 "data_size": 65536 00:08:38.725 }, 00:08:38.725 { 00:08:38.725 "name": "BaseBdev3", 00:08:38.725 "uuid": "dafb639b-fc21-486e-882b-095987f3394a", 00:08:38.725 "is_configured": true, 00:08:38.725 "data_offset": 0, 00:08:38.725 "data_size": 65536 00:08:38.725 } 00:08:38.725 ] 00:08:38.725 }' 00:08:38.725 20:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.725 20:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.312 [2024-12-08 20:04:11.060531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.312 20:04:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.312 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.312 "name": "Existed_Raid", 00:08:39.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.312 "strip_size_kb": 64, 00:08:39.312 "state": "configuring", 00:08:39.312 "raid_level": "concat", 00:08:39.312 "superblock": false, 00:08:39.312 "num_base_bdevs": 3, 00:08:39.312 "num_base_bdevs_discovered": 1, 00:08:39.312 "num_base_bdevs_operational": 3, 00:08:39.312 "base_bdevs_list": [ 00:08:39.312 { 00:08:39.312 "name": null, 00:08:39.312 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:39.312 "is_configured": false, 00:08:39.312 "data_offset": 0, 00:08:39.312 "data_size": 65536 00:08:39.312 }, 00:08:39.312 { 00:08:39.313 "name": null, 00:08:39.313 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41", 00:08:39.313 "is_configured": false, 00:08:39.313 "data_offset": 0, 00:08:39.313 "data_size": 65536 00:08:39.313 }, 00:08:39.313 { 00:08:39.313 "name": "BaseBdev3", 00:08:39.313 "uuid": "dafb639b-fc21-486e-882b-095987f3394a", 00:08:39.313 "is_configured": true, 00:08:39.313 "data_offset": 0, 00:08:39.313 "data_size": 65536 00:08:39.313 } 00:08:39.313 ] 00:08:39.313 }' 00:08:39.313 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.313 20:04:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 [2024-12-08 20:04:11.644760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.883 20:04:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.883 "name": "Existed_Raid", 00:08:39.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.883 "strip_size_kb": 64, 00:08:39.883 "state": "configuring", 00:08:39.883 "raid_level": "concat", 00:08:39.883 "superblock": false, 00:08:39.883 "num_base_bdevs": 3, 00:08:39.883 "num_base_bdevs_discovered": 2, 00:08:39.883 "num_base_bdevs_operational": 3, 00:08:39.883 "base_bdevs_list": [ 00:08:39.883 { 00:08:39.883 "name": null, 00:08:39.883 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:39.883 "is_configured": false, 00:08:39.883 "data_offset": 0, 00:08:39.883 "data_size": 65536 00:08:39.883 }, 00:08:39.883 { 00:08:39.883 "name": "BaseBdev2", 00:08:39.883 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41", 00:08:39.883 "is_configured": true, 00:08:39.883 "data_offset": 
0, 00:08:39.883 "data_size": 65536 00:08:39.883 }, 00:08:39.883 { 00:08:39.883 "name": "BaseBdev3", 00:08:39.883 "uuid": "dafb639b-fc21-486e-882b-095987f3394a", 00:08:39.883 "is_configured": true, 00:08:39.883 "data_offset": 0, 00:08:39.883 "data_size": 65536 00:08:39.883 } 00:08:39.883 ] 00:08:39.883 }' 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.883 20:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.143 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.143 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.143 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.143 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.143 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d8afb207-9688-421e-8d0a-940da21e96a2 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.404 [2024-12-08 20:04:12.220357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.404 [2024-12-08 20:04:12.220401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.404 [2024-12-08 20:04:12.220409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:40.404 [2024-12-08 20:04:12.220642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:40.404 [2024-12-08 20:04:12.220783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.404 [2024-12-08 20:04:12.220793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:40.404 [2024-12-08 20:04:12.221085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.404 NewBaseBdev 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.404 
20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.404 [ 00:08:40.404 { 00:08:40.404 "name": "NewBaseBdev", 00:08:40.404 "aliases": [ 00:08:40.404 "d8afb207-9688-421e-8d0a-940da21e96a2" 00:08:40.404 ], 00:08:40.404 "product_name": "Malloc disk", 00:08:40.404 "block_size": 512, 00:08:40.404 "num_blocks": 65536, 00:08:40.404 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:40.404 "assigned_rate_limits": { 00:08:40.404 "rw_ios_per_sec": 0, 00:08:40.404 "rw_mbytes_per_sec": 0, 00:08:40.404 "r_mbytes_per_sec": 0, 00:08:40.404 "w_mbytes_per_sec": 0 00:08:40.404 }, 00:08:40.404 "claimed": true, 00:08:40.404 "claim_type": "exclusive_write", 00:08:40.404 "zoned": false, 00:08:40.404 "supported_io_types": { 00:08:40.404 "read": true, 00:08:40.404 "write": true, 00:08:40.404 "unmap": true, 00:08:40.404 "flush": true, 00:08:40.404 "reset": true, 00:08:40.404 "nvme_admin": false, 00:08:40.404 "nvme_io": false, 00:08:40.404 "nvme_io_md": false, 00:08:40.404 "write_zeroes": true, 00:08:40.404 "zcopy": true, 00:08:40.404 "get_zone_info": false, 00:08:40.404 "zone_management": false, 00:08:40.404 "zone_append": false, 00:08:40.404 "compare": false, 00:08:40.404 "compare_and_write": false, 00:08:40.404 "abort": true, 00:08:40.404 "seek_hole": false, 00:08:40.404 "seek_data": false, 00:08:40.404 "copy": true, 00:08:40.404 "nvme_iov_md": false 00:08:40.404 }, 00:08:40.404 
"memory_domains": [ 00:08:40.404 { 00:08:40.404 "dma_device_id": "system", 00:08:40.404 "dma_device_type": 1 00:08:40.404 }, 00:08:40.404 { 00:08:40.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.404 "dma_device_type": 2 00:08:40.404 } 00:08:40.404 ], 00:08:40.404 "driver_specific": {} 00:08:40.404 } 00:08:40.404 ] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.404 "name": "Existed_Raid", 00:08:40.404 "uuid": "9bf4bf82-4c89-467b-beec-43d5b48471cb", 00:08:40.404 "strip_size_kb": 64, 00:08:40.404 "state": "online", 00:08:40.404 "raid_level": "concat", 00:08:40.404 "superblock": false, 00:08:40.404 "num_base_bdevs": 3, 00:08:40.404 "num_base_bdevs_discovered": 3, 00:08:40.404 "num_base_bdevs_operational": 3, 00:08:40.404 "base_bdevs_list": [ 00:08:40.404 { 00:08:40.404 "name": "NewBaseBdev", 00:08:40.404 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:40.404 "is_configured": true, 00:08:40.404 "data_offset": 0, 00:08:40.404 "data_size": 65536 00:08:40.404 }, 00:08:40.404 { 00:08:40.404 "name": "BaseBdev2", 00:08:40.404 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41", 00:08:40.404 "is_configured": true, 00:08:40.404 "data_offset": 0, 00:08:40.404 "data_size": 65536 00:08:40.404 }, 00:08:40.404 { 00:08:40.404 "name": "BaseBdev3", 00:08:40.404 "uuid": "dafb639b-fc21-486e-882b-095987f3394a", 00:08:40.404 "is_configured": true, 00:08:40.404 "data_offset": 0, 00:08:40.404 "data_size": 65536 00:08:40.404 } 00:08:40.404 ] 00:08:40.404 }' 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.404 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.974 [2024-12-08 20:04:12.739980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.974 "name": "Existed_Raid", 00:08:40.974 "aliases": [ 00:08:40.974 "9bf4bf82-4c89-467b-beec-43d5b48471cb" 00:08:40.974 ], 00:08:40.974 "product_name": "Raid Volume", 00:08:40.974 "block_size": 512, 00:08:40.974 "num_blocks": 196608, 00:08:40.974 "uuid": "9bf4bf82-4c89-467b-beec-43d5b48471cb", 00:08:40.974 "assigned_rate_limits": { 00:08:40.974 "rw_ios_per_sec": 0, 00:08:40.974 "rw_mbytes_per_sec": 0, 00:08:40.974 "r_mbytes_per_sec": 0, 00:08:40.974 "w_mbytes_per_sec": 0 00:08:40.974 }, 00:08:40.974 "claimed": false, 00:08:40.974 "zoned": false, 00:08:40.974 "supported_io_types": { 00:08:40.974 "read": true, 00:08:40.974 "write": true, 00:08:40.974 "unmap": true, 00:08:40.974 "flush": true, 00:08:40.974 "reset": true, 00:08:40.974 "nvme_admin": false, 00:08:40.974 "nvme_io": false, 00:08:40.974 "nvme_io_md": false, 00:08:40.974 "write_zeroes": true, 
00:08:40.974 "zcopy": false, 00:08:40.974 "get_zone_info": false, 00:08:40.974 "zone_management": false, 00:08:40.974 "zone_append": false, 00:08:40.974 "compare": false, 00:08:40.974 "compare_and_write": false, 00:08:40.974 "abort": false, 00:08:40.974 "seek_hole": false, 00:08:40.974 "seek_data": false, 00:08:40.974 "copy": false, 00:08:40.974 "nvme_iov_md": false 00:08:40.974 }, 00:08:40.974 "memory_domains": [ 00:08:40.974 { 00:08:40.974 "dma_device_id": "system", 00:08:40.974 "dma_device_type": 1 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.974 "dma_device_type": 2 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "dma_device_id": "system", 00:08:40.974 "dma_device_type": 1 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.974 "dma_device_type": 2 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "dma_device_id": "system", 00:08:40.974 "dma_device_type": 1 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.974 "dma_device_type": 2 00:08:40.974 } 00:08:40.974 ], 00:08:40.974 "driver_specific": { 00:08:40.974 "raid": { 00:08:40.974 "uuid": "9bf4bf82-4c89-467b-beec-43d5b48471cb", 00:08:40.974 "strip_size_kb": 64, 00:08:40.974 "state": "online", 00:08:40.974 "raid_level": "concat", 00:08:40.974 "superblock": false, 00:08:40.974 "num_base_bdevs": 3, 00:08:40.974 "num_base_bdevs_discovered": 3, 00:08:40.974 "num_base_bdevs_operational": 3, 00:08:40.974 "base_bdevs_list": [ 00:08:40.974 { 00:08:40.974 "name": "NewBaseBdev", 00:08:40.974 "uuid": "d8afb207-9688-421e-8d0a-940da21e96a2", 00:08:40.974 "is_configured": true, 00:08:40.974 "data_offset": 0, 00:08:40.974 "data_size": 65536 00:08:40.974 }, 00:08:40.974 { 00:08:40.974 "name": "BaseBdev2", 00:08:40.974 "uuid": "2433e2b6-9182-4df4-ae80-faa1f6d11d41", 00:08:40.974 "is_configured": true, 00:08:40.974 "data_offset": 0, 00:08:40.974 "data_size": 65536 00:08:40.974 }, 00:08:40.974 { 
00:08:40.974 "name": "BaseBdev3", 00:08:40.974 "uuid": "dafb639b-fc21-486e-882b-095987f3394a", 00:08:40.974 "is_configured": true, 00:08:40.974 "data_offset": 0, 00:08:40.974 "data_size": 65536 00:08:40.974 } 00:08:40.974 ] 00:08:40.974 } 00:08:40.974 } 00:08:40.974 }' 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.974 BaseBdev2 00:08:40.974 BaseBdev3' 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.974 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.975 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.235 [2024-12-08 20:04:12.991301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:41.235 [2024-12-08 20:04:12.991333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:41.235 [2024-12-08 20:04:12.991413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:41.235 [2024-12-08 20:04:12.991471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:41.235 [2024-12-08 20:04:12.991483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65440
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65440 ']'
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65440
00:08:41.235 20:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65440
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65440' killing process with pid 65440 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65440
00:08:41.235 [2024-12-08 20:04:13.029007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:41.235 20:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65440
00:08:41.496 [2024-12-08 20:04:13.327833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:42.437 20:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:42.437
00:08:42.437 real 0m10.246s
00:08:42.437 user 0m16.270s
00:08:42.437 sys 0m1.732s
00:08:42.437 20:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.697 ************************************
00:08:42.697 END TEST raid_state_function_test
00:08:42.697 ************************************
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.697 20:04:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:08:42.697 20:04:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:42.697 20:04:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.697 20:04:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:42.697 ************************************
00:08:42.697 START TEST raid_state_function_test_sb
00:08:42.697 ************************************
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:42.697 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s Process raid pid: 66062
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66062
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66062'
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66062
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66062 ']' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:42.698 20:04:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:42.698 [2024-12-08 20:04:14.577163] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... [2024-12-08 20:04:14.577285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:42.958 [2024-12-08 20:04:14.754016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.958 [2024-12-08 20:04:14.866874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.217 [2024-12-08 20:04:15.066580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:43.217 [2024-12-08 20:04:15.066631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.477 [2024-12-08 20:04:15.410275] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:43.477 [2024-12-08 20:04:15.410373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:43.477 [2024-12-08 20:04:15.410389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:43.477 [2024-12-08 20:04:15.410399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:43.477 [2024-12-08 20:04:15.410406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:43.477 [2024-12-08 20:04:15.410415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.477 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.736 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.737 "name": "Existed_Raid",
00:08:43.737 "uuid": "16c63122-28bc-490c-a0f2-aea7cfb82946",
00:08:43.737 "strip_size_kb": 64,
00:08:43.737 "state": "configuring",
00:08:43.737 "raid_level": "concat",
00:08:43.737 "superblock": true,
00:08:43.737 "num_base_bdevs": 3,
00:08:43.737 "num_base_bdevs_discovered": 0,
00:08:43.737 "num_base_bdevs_operational": 3,
00:08:43.737 "base_bdevs_list": [
00:08:43.737 {
00:08:43.737 "name": "BaseBdev1",
00:08:43.737 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.737 "is_configured": false,
00:08:43.737 "data_offset": 0,
00:08:43.737 "data_size": 0
00:08:43.737 },
00:08:43.737 {
00:08:43.737 "name": "BaseBdev2",
00:08:43.737 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.737 "is_configured": false,
00:08:43.737 "data_offset": 0,
00:08:43.737 "data_size": 0
00:08:43.737 },
00:08:43.737 {
00:08:43.737 "name": "BaseBdev3",
00:08:43.737 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.737 "is_configured": false,
00:08:43.737 "data_offset": 0,
00:08:43.737 "data_size": 0
00:08:43.737 }
00:08:43.737 ]
00:08:43.737 }'
00:08:43.737 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.737 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.995 [2024-12-08 20:04:15.849459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:43.995 [2024-12-08 20:04:15.849558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.995 [2024-12-08 20:04:15.857464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:43.995 [2024-12-08 20:04:15.857547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:43.995 [2024-12-08 20:04:15.857562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:43.995 [2024-12-08 20:04:15.857572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:43.995 [2024-12-08 20:04:15.857578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:43.995 [2024-12-08 20:04:15.857587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.995 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.995 [2024-12-08 20:04:15.901490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:43.996 BaseBdev1 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.996 [
00:08:43.996 {
00:08:43.996 "name": "BaseBdev1",
00:08:43.996 "aliases": [
00:08:43.996 "b8536dc0-be58-4d2c-ae42-0f5561a89b5d"
00:08:43.996 ],
00:08:43.996 "product_name": "Malloc disk",
00:08:43.996 "block_size": 512,
00:08:43.996 "num_blocks": 65536,
00:08:43.996 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d",
00:08:43.996 "assigned_rate_limits": {
00:08:43.996 "rw_ios_per_sec": 0,
00:08:43.996 "rw_mbytes_per_sec": 0,
00:08:43.996 "r_mbytes_per_sec": 0,
00:08:43.996 "w_mbytes_per_sec": 0
00:08:43.996 },
00:08:43.996 "claimed": true,
00:08:43.996 "claim_type": "exclusive_write",
00:08:43.996 "zoned": false,
00:08:43.996 "supported_io_types": {
00:08:43.996 "read": true,
00:08:43.996 "write": true,
00:08:43.996 "unmap": true,
00:08:43.996 "flush": true,
00:08:43.996 "reset": true,
00:08:43.996 "nvme_admin": false,
00:08:43.996 "nvme_io": false,
00:08:43.996 "nvme_io_md": false,
00:08:43.996 "write_zeroes": true,
00:08:43.996 "zcopy": true,
00:08:43.996 "get_zone_info": false,
00:08:43.996 "zone_management": false,
00:08:43.996 "zone_append": false,
00:08:43.996 "compare": false,
00:08:43.996 "compare_and_write": false,
00:08:43.996 "abort": true,
00:08:43.996 "seek_hole": false,
00:08:43.996 "seek_data": false,
00:08:43.996 "copy": true,
00:08:43.996 "nvme_iov_md": false
00:08:43.996 },
00:08:43.996 "memory_domains": [
00:08:43.996 {
00:08:43.996 "dma_device_id": "system",
00:08:43.996 "dma_device_type": 1
00:08:43.996 },
00:08:43.996 {
00:08:43.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.996 "dma_device_type": 2
00:08:43.996 }
00:08:43.996 ],
00:08:43.996 "driver_specific": {}
00:08:43.996 }
00:08:43.996 ]
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:43.996 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.255 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.255 "name": "Existed_Raid",
00:08:44.255 "uuid": "8d81191b-195e-47eb-8c72-3178014676f5",
00:08:44.255 "strip_size_kb": 64,
00:08:44.255 "state": "configuring",
00:08:44.255 "raid_level": "concat",
00:08:44.255 "superblock": true,
00:08:44.255 "num_base_bdevs": 3,
00:08:44.255 "num_base_bdevs_discovered": 1,
00:08:44.255 "num_base_bdevs_operational": 3,
00:08:44.255 "base_bdevs_list": [
00:08:44.255 {
00:08:44.255 "name": "BaseBdev1",
00:08:44.255 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d",
00:08:44.255 "is_configured": true,
00:08:44.255 "data_offset": 2048,
00:08:44.255 "data_size": 63488
00:08:44.255 },
00:08:44.255 {
00:08:44.255 "name": "BaseBdev2",
00:08:44.255 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:44.255 "is_configured": false,
00:08:44.255 "data_offset": 0,
00:08:44.255 "data_size": 0
00:08:44.255 },
00:08:44.255 {
00:08:44.255 "name": "BaseBdev3",
00:08:44.255 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:44.255 "is_configured": false,
00:08:44.255 "data_offset": 0,
00:08:44.255 "data_size": 0
00:08:44.255 }
00:08:44.255 ]
00:08:44.255 }'
00:08:44.255 20:04:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.255 20:04:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.515 [2024-12-08 20:04:16.392708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:44.515 [2024-12-08 20:04:16.392837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.515 [2024-12-08 20:04:16.400747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:44.515 [2024-12-08 20:04:16.402572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:44.515 [2024-12-08 20:04:16.402614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:44.515 [2024-12-08 20:04:16.402624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:44.515 [2024-12-08 20:04:16.402634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.515 "name": "Existed_Raid",
00:08:44.515 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c",
00:08:44.515 "strip_size_kb": 64,
00:08:44.515 "state": "configuring",
00:08:44.515 "raid_level": "concat",
00:08:44.515 "superblock": true,
00:08:44.515 "num_base_bdevs": 3,
00:08:44.515 "num_base_bdevs_discovered": 1,
00:08:44.515 "num_base_bdevs_operational": 3,
00:08:44.515 "base_bdevs_list": [
00:08:44.515 {
00:08:44.515 "name": "BaseBdev1",
00:08:44.515 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d",
00:08:44.515 "is_configured": true,
00:08:44.515 "data_offset": 2048,
00:08:44.515 "data_size": 63488
00:08:44.515 },
00:08:44.515 {
00:08:44.515 "name": "BaseBdev2",
00:08:44.515 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:44.515 "is_configured": false,
00:08:44.515 "data_offset": 0,
00:08:44.515 "data_size": 0
00:08:44.515 },
00:08:44.515 {
00:08:44.515 "name": "BaseBdev3",
00:08:44.515 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:44.515 "is_configured": false,
00:08:44.515 "data_offset": 0,
00:08:44.515 "data_size": 0
00:08:44.515 }
00:08:44.515 ]
00:08:44.515 }'
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.515 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 [2024-12-08 20:04:16.921892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed BaseBdev2 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 [
00:08:45.085 {
00:08:45.085 "name": "BaseBdev2",
00:08:45.085 "aliases": [
00:08:45.085 "69ea2746-0b98-4346-8674-9d4e2d5dff99"
00:08:45.085 ],
00:08:45.085 "product_name": "Malloc disk",
00:08:45.085 "block_size": 512,
00:08:45.085 "num_blocks": 65536,
00:08:45.085 "uuid": "69ea2746-0b98-4346-8674-9d4e2d5dff99",
00:08:45.085 "assigned_rate_limits": {
00:08:45.085 "rw_ios_per_sec": 0,
00:08:45.085 "rw_mbytes_per_sec": 0,
00:08:45.085 "r_mbytes_per_sec": 0,
00:08:45.085 "w_mbytes_per_sec": 0
00:08:45.085 },
00:08:45.085 "claimed": true,
00:08:45.085 "claim_type": "exclusive_write",
00:08:45.085 "zoned": false,
00:08:45.085 "supported_io_types": {
00:08:45.085 "read": true,
00:08:45.085 "write": true,
00:08:45.085 "unmap": true,
00:08:45.085 "flush": true,
00:08:45.085 "reset": true,
00:08:45.085 "nvme_admin": false,
00:08:45.085 "nvme_io": false,
00:08:45.085 "nvme_io_md": false,
00:08:45.085 "write_zeroes": true,
00:08:45.085 "zcopy": true,
00:08:45.085 "get_zone_info": false,
00:08:45.085 "zone_management": false,
00:08:45.085 "zone_append": false,
00:08:45.085 "compare": false,
00:08:45.085 "compare_and_write": false,
00:08:45.085 "abort": true,
00:08:45.085 "seek_hole": false,
00:08:45.085 "seek_data": false,
00:08:45.085 "copy": true,
00:08:45.085 "nvme_iov_md": false
00:08:45.085 },
00:08:45.085 "memory_domains": [
00:08:45.085 {
00:08:45.085 "dma_device_id": "system",
00:08:45.085 "dma_device_type": 1
00:08:45.085 },
00:08:45.085 {
00:08:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.085 "dma_device_type": 2
00:08:45.085 }
00:08:45.085 ],
00:08:45.085 "driver_specific": {}
00:08:45.085 }
00:08:45.085 ]
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.085 20:04:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.085 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.085 "name": "Existed_Raid",
00:08:45.085 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c",
00:08:45.085 "strip_size_kb": 64,
00:08:45.085 "state": "configuring",
00:08:45.085 "raid_level": "concat",
00:08:45.085 "superblock": true,
00:08:45.085 "num_base_bdevs": 3,
00:08:45.085 "num_base_bdevs_discovered": 2,
00:08:45.085 "num_base_bdevs_operational": 3,
00:08:45.085 "base_bdevs_list": [
00:08:45.085 {
00:08:45.085 "name": "BaseBdev1",
00:08:45.085 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d",
00:08:45.085 "is_configured": true,
00:08:45.085 "data_offset": 2048,
00:08:45.085 "data_size": 63488
00:08:45.085 },
00:08:45.085 {
00:08:45.085 "name": "BaseBdev2",
00:08:45.085 "uuid": "69ea2746-0b98-4346-8674-9d4e2d5dff99",
00:08:45.085 "is_configured": true,
00:08:45.085 "data_offset": 2048,
00:08:45.085 "data_size": 63488
00:08:45.085 },
00:08:45.085 {
00:08:45.085 "name": "BaseBdev3",
00:08:45.085 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.085 "is_configured": false,
00:08:45.085 "data_offset": 0,
00:08:45.085 "data_size": 0
00:08:45.085 }
00:08:45.085 ]
00:08:45.085 }'
00:08:45.085 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.085 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.656 [2024-12-08 20:04:17.447986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:45.656 [2024-12-08 20:04:17.448249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:45.656 [2024-12-08 20:04:17.448271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:45.656 [2024-12-08 20:04:17.448559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:45.656 [2024-12-08 20:04:17.448754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:45.656 [2024-12-08 20:04:17.448766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 BaseBdev3 [2024-12-08 20:04:17.448962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.656 [ 00:08:45.656 { 00:08:45.656 "name": "BaseBdev3", 00:08:45.656 "aliases": [ 00:08:45.656 "4c648446-ddd9-4681-ab4d-2ce77f77d92a" 00:08:45.656 ], 00:08:45.656 "product_name": "Malloc disk", 00:08:45.656 "block_size": 512, 00:08:45.656 "num_blocks": 65536, 00:08:45.656 "uuid": "4c648446-ddd9-4681-ab4d-2ce77f77d92a", 00:08:45.656 "assigned_rate_limits": { 00:08:45.656 "rw_ios_per_sec": 0, 00:08:45.656 "rw_mbytes_per_sec": 0, 00:08:45.656 "r_mbytes_per_sec": 0, 00:08:45.656 "w_mbytes_per_sec": 0 00:08:45.656 }, 00:08:45.656 "claimed": true, 00:08:45.656 "claim_type": "exclusive_write", 00:08:45.656 "zoned": false, 00:08:45.656 "supported_io_types": { 00:08:45.656 "read": true, 00:08:45.656 "write": true, 00:08:45.656 "unmap": true, 00:08:45.656 "flush": true, 00:08:45.656 "reset": true, 00:08:45.656 "nvme_admin": false, 00:08:45.656 "nvme_io": false, 00:08:45.656 "nvme_io_md": false, 00:08:45.656 "write_zeroes": true, 00:08:45.656 "zcopy": true, 00:08:45.656 "get_zone_info": false, 00:08:45.656 "zone_management": false, 00:08:45.656 "zone_append": false, 00:08:45.656 "compare": false, 00:08:45.656 "compare_and_write": false, 00:08:45.656 "abort": true, 00:08:45.656 "seek_hole": false, 00:08:45.656 "seek_data": false, 
00:08:45.656 "copy": true, 00:08:45.656 "nvme_iov_md": false 00:08:45.656 }, 00:08:45.656 "memory_domains": [ 00:08:45.656 { 00:08:45.656 "dma_device_id": "system", 00:08:45.656 "dma_device_type": 1 00:08:45.656 }, 00:08:45.656 { 00:08:45.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.656 "dma_device_type": 2 00:08:45.656 } 00:08:45.656 ], 00:08:45.656 "driver_specific": {} 00:08:45.656 } 00:08:45.656 ] 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.656 "name": "Existed_Raid", 00:08:45.656 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c", 00:08:45.656 "strip_size_kb": 64, 00:08:45.656 "state": "online", 00:08:45.656 "raid_level": "concat", 00:08:45.656 "superblock": true, 00:08:45.656 "num_base_bdevs": 3, 00:08:45.656 "num_base_bdevs_discovered": 3, 00:08:45.656 "num_base_bdevs_operational": 3, 00:08:45.656 "base_bdevs_list": [ 00:08:45.656 { 00:08:45.656 "name": "BaseBdev1", 00:08:45.656 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d", 00:08:45.656 "is_configured": true, 00:08:45.656 "data_offset": 2048, 00:08:45.656 "data_size": 63488 00:08:45.656 }, 00:08:45.656 { 00:08:45.656 "name": "BaseBdev2", 00:08:45.656 "uuid": "69ea2746-0b98-4346-8674-9d4e2d5dff99", 00:08:45.656 "is_configured": true, 00:08:45.656 "data_offset": 2048, 00:08:45.656 "data_size": 63488 00:08:45.656 }, 00:08:45.656 { 00:08:45.656 "name": "BaseBdev3", 00:08:45.656 "uuid": "4c648446-ddd9-4681-ab4d-2ce77f77d92a", 00:08:45.656 "is_configured": true, 00:08:45.656 "data_offset": 2048, 00:08:45.656 "data_size": 63488 00:08:45.656 } 00:08:45.656 ] 00:08:45.656 }' 00:08:45.656 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.656 20:04:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.226 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.227 [2024-12-08 20:04:17.915640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.227 "name": "Existed_Raid", 00:08:46.227 "aliases": [ 00:08:46.227 "60950a21-0189-4be9-8aec-6f61522fc02c" 00:08:46.227 ], 00:08:46.227 "product_name": "Raid Volume", 00:08:46.227 "block_size": 512, 00:08:46.227 "num_blocks": 190464, 00:08:46.227 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c", 00:08:46.227 "assigned_rate_limits": { 00:08:46.227 "rw_ios_per_sec": 0, 00:08:46.227 "rw_mbytes_per_sec": 0, 00:08:46.227 
"r_mbytes_per_sec": 0, 00:08:46.227 "w_mbytes_per_sec": 0 00:08:46.227 }, 00:08:46.227 "claimed": false, 00:08:46.227 "zoned": false, 00:08:46.227 "supported_io_types": { 00:08:46.227 "read": true, 00:08:46.227 "write": true, 00:08:46.227 "unmap": true, 00:08:46.227 "flush": true, 00:08:46.227 "reset": true, 00:08:46.227 "nvme_admin": false, 00:08:46.227 "nvme_io": false, 00:08:46.227 "nvme_io_md": false, 00:08:46.227 "write_zeroes": true, 00:08:46.227 "zcopy": false, 00:08:46.227 "get_zone_info": false, 00:08:46.227 "zone_management": false, 00:08:46.227 "zone_append": false, 00:08:46.227 "compare": false, 00:08:46.227 "compare_and_write": false, 00:08:46.227 "abort": false, 00:08:46.227 "seek_hole": false, 00:08:46.227 "seek_data": false, 00:08:46.227 "copy": false, 00:08:46.227 "nvme_iov_md": false 00:08:46.227 }, 00:08:46.227 "memory_domains": [ 00:08:46.227 { 00:08:46.227 "dma_device_id": "system", 00:08:46.227 "dma_device_type": 1 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.227 "dma_device_type": 2 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "dma_device_id": "system", 00:08:46.227 "dma_device_type": 1 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.227 "dma_device_type": 2 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "dma_device_id": "system", 00:08:46.227 "dma_device_type": 1 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.227 "dma_device_type": 2 00:08:46.227 } 00:08:46.227 ], 00:08:46.227 "driver_specific": { 00:08:46.227 "raid": { 00:08:46.227 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c", 00:08:46.227 "strip_size_kb": 64, 00:08:46.227 "state": "online", 00:08:46.227 "raid_level": "concat", 00:08:46.227 "superblock": true, 00:08:46.227 "num_base_bdevs": 3, 00:08:46.227 "num_base_bdevs_discovered": 3, 00:08:46.227 "num_base_bdevs_operational": 3, 00:08:46.227 "base_bdevs_list": [ 00:08:46.227 { 00:08:46.227 
"name": "BaseBdev1", 00:08:46.227 "uuid": "b8536dc0-be58-4d2c-ae42-0f5561a89b5d", 00:08:46.227 "is_configured": true, 00:08:46.227 "data_offset": 2048, 00:08:46.227 "data_size": 63488 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "name": "BaseBdev2", 00:08:46.227 "uuid": "69ea2746-0b98-4346-8674-9d4e2d5dff99", 00:08:46.227 "is_configured": true, 00:08:46.227 "data_offset": 2048, 00:08:46.227 "data_size": 63488 00:08:46.227 }, 00:08:46.227 { 00:08:46.227 "name": "BaseBdev3", 00:08:46.227 "uuid": "4c648446-ddd9-4681-ab4d-2ce77f77d92a", 00:08:46.227 "is_configured": true, 00:08:46.227 "data_offset": 2048, 00:08:46.227 "data_size": 63488 00:08:46.227 } 00:08:46.227 ] 00:08:46.227 } 00:08:46.227 } 00:08:46.227 }' 00:08:46.227 20:04:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.227 BaseBdev2 00:08:46.227 BaseBdev3' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.227 20:04:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.227 20:04:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.488 [2024-12-08 20:04:18.214885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.488 [2024-12-08 20:04:18.214915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.488 [2024-12-08 20:04:18.214988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.488 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.488 "name": "Existed_Raid", 00:08:46.488 "uuid": "60950a21-0189-4be9-8aec-6f61522fc02c", 00:08:46.488 "strip_size_kb": 64, 00:08:46.488 "state": "offline", 00:08:46.488 "raid_level": "concat", 00:08:46.488 "superblock": true, 00:08:46.488 "num_base_bdevs": 3, 00:08:46.488 "num_base_bdevs_discovered": 2, 00:08:46.488 "num_base_bdevs_operational": 2, 00:08:46.488 "base_bdevs_list": [ 00:08:46.488 { 00:08:46.488 "name": null, 00:08:46.488 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:46.488 "is_configured": false, 00:08:46.488 "data_offset": 0, 00:08:46.488 "data_size": 63488 00:08:46.488 }, 00:08:46.488 { 00:08:46.488 "name": "BaseBdev2", 00:08:46.488 "uuid": "69ea2746-0b98-4346-8674-9d4e2d5dff99", 00:08:46.488 "is_configured": true, 00:08:46.488 "data_offset": 2048, 00:08:46.488 "data_size": 63488 00:08:46.488 }, 00:08:46.488 { 00:08:46.488 "name": "BaseBdev3", 00:08:46.488 "uuid": "4c648446-ddd9-4681-ab4d-2ce77f77d92a", 00:08:46.489 "is_configured": true, 00:08:46.489 "data_offset": 2048, 00:08:46.489 "data_size": 63488 00:08:46.489 } 00:08:46.489 ] 00:08:46.489 }' 00:08:46.489 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.489 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.057 [2024-12-08 20:04:18.842289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.057 20:04:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.057 [2024-12-08 20:04:18.990811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.057 [2024-12-08 20:04:18.990875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 BaseBdev2 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.317 
20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 [ 00:08:47.318 { 00:08:47.318 "name": "BaseBdev2", 00:08:47.318 "aliases": [ 00:08:47.318 "68b8d330-1949-4d03-85ef-8234711cdd7d" 00:08:47.318 ], 00:08:47.318 "product_name": "Malloc disk", 00:08:47.318 "block_size": 512, 00:08:47.318 "num_blocks": 65536, 00:08:47.318 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:47.318 "assigned_rate_limits": { 00:08:47.318 "rw_ios_per_sec": 0, 00:08:47.318 "rw_mbytes_per_sec": 0, 00:08:47.318 "r_mbytes_per_sec": 0, 00:08:47.318 "w_mbytes_per_sec": 0 
00:08:47.318 }, 00:08:47.318 "claimed": false, 00:08:47.318 "zoned": false, 00:08:47.318 "supported_io_types": { 00:08:47.318 "read": true, 00:08:47.318 "write": true, 00:08:47.318 "unmap": true, 00:08:47.318 "flush": true, 00:08:47.318 "reset": true, 00:08:47.318 "nvme_admin": false, 00:08:47.318 "nvme_io": false, 00:08:47.318 "nvme_io_md": false, 00:08:47.318 "write_zeroes": true, 00:08:47.318 "zcopy": true, 00:08:47.318 "get_zone_info": false, 00:08:47.318 "zone_management": false, 00:08:47.318 "zone_append": false, 00:08:47.318 "compare": false, 00:08:47.318 "compare_and_write": false, 00:08:47.318 "abort": true, 00:08:47.318 "seek_hole": false, 00:08:47.318 "seek_data": false, 00:08:47.318 "copy": true, 00:08:47.318 "nvme_iov_md": false 00:08:47.318 }, 00:08:47.318 "memory_domains": [ 00:08:47.318 { 00:08:47.318 "dma_device_id": "system", 00:08:47.318 "dma_device_type": 1 00:08:47.318 }, 00:08:47.318 { 00:08:47.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.318 "dma_device_type": 2 00:08:47.318 } 00:08:47.318 ], 00:08:47.318 "driver_specific": {} 00:08:47.318 } 00:08:47.318 ] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 BaseBdev3 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.318 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.318 [ 00:08:47.318 { 00:08:47.318 "name": "BaseBdev3", 00:08:47.318 "aliases": [ 00:08:47.318 "16a08918-01c3-4059-92a5-db813cb76c71" 00:08:47.318 ], 00:08:47.318 "product_name": "Malloc disk", 00:08:47.318 "block_size": 512, 00:08:47.578 "num_blocks": 65536, 00:08:47.578 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:47.578 "assigned_rate_limits": { 00:08:47.578 "rw_ios_per_sec": 0, 00:08:47.578 "rw_mbytes_per_sec": 0, 
00:08:47.578 "r_mbytes_per_sec": 0, 00:08:47.578 "w_mbytes_per_sec": 0 00:08:47.578 }, 00:08:47.578 "claimed": false, 00:08:47.578 "zoned": false, 00:08:47.578 "supported_io_types": { 00:08:47.578 "read": true, 00:08:47.578 "write": true, 00:08:47.578 "unmap": true, 00:08:47.578 "flush": true, 00:08:47.578 "reset": true, 00:08:47.578 "nvme_admin": false, 00:08:47.578 "nvme_io": false, 00:08:47.578 "nvme_io_md": false, 00:08:47.578 "write_zeroes": true, 00:08:47.578 "zcopy": true, 00:08:47.578 "get_zone_info": false, 00:08:47.578 "zone_management": false, 00:08:47.578 "zone_append": false, 00:08:47.578 "compare": false, 00:08:47.578 "compare_and_write": false, 00:08:47.578 "abort": true, 00:08:47.578 "seek_hole": false, 00:08:47.578 "seek_data": false, 00:08:47.578 "copy": true, 00:08:47.578 "nvme_iov_md": false 00:08:47.578 }, 00:08:47.578 "memory_domains": [ 00:08:47.578 { 00:08:47.578 "dma_device_id": "system", 00:08:47.578 "dma_device_type": 1 00:08:47.578 }, 00:08:47.578 { 00:08:47.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.578 "dma_device_type": 2 00:08:47.578 } 00:08:47.578 ], 00:08:47.578 "driver_specific": {} 00:08:47.578 } 00:08:47.578 ] 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.578 [2024-12-08 20:04:19.308827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.578 [2024-12-08 20:04:19.308913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.578 [2024-12-08 20:04:19.308969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.578 [2024-12-08 20:04:19.310812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.578 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.579 20:04:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.579 "name": "Existed_Raid", 00:08:47.579 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:47.579 "strip_size_kb": 64, 00:08:47.579 "state": "configuring", 00:08:47.579 "raid_level": "concat", 00:08:47.579 "superblock": true, 00:08:47.579 "num_base_bdevs": 3, 00:08:47.579 "num_base_bdevs_discovered": 2, 00:08:47.579 "num_base_bdevs_operational": 3, 00:08:47.579 "base_bdevs_list": [ 00:08:47.579 { 00:08:47.579 "name": "BaseBdev1", 00:08:47.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.579 "is_configured": false, 00:08:47.579 "data_offset": 0, 00:08:47.579 "data_size": 0 00:08:47.579 }, 00:08:47.579 { 00:08:47.579 "name": "BaseBdev2", 00:08:47.579 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:47.579 "is_configured": true, 00:08:47.579 "data_offset": 2048, 00:08:47.579 "data_size": 63488 00:08:47.579 }, 00:08:47.579 { 00:08:47.579 "name": "BaseBdev3", 00:08:47.579 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:47.579 "is_configured": true, 00:08:47.579 "data_offset": 2048, 00:08:47.579 "data_size": 63488 00:08:47.579 } 00:08:47.579 ] 00:08:47.579 }' 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.579 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.839 [2024-12-08 20:04:19.788084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.839 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.099 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.099 "name": "Existed_Raid", 00:08:48.099 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:48.099 "strip_size_kb": 64, 00:08:48.099 "state": "configuring", 00:08:48.099 "raid_level": "concat", 00:08:48.099 "superblock": true, 00:08:48.099 "num_base_bdevs": 3, 00:08:48.099 "num_base_bdevs_discovered": 1, 00:08:48.099 "num_base_bdevs_operational": 3, 00:08:48.099 "base_bdevs_list": [ 00:08:48.099 { 00:08:48.099 "name": "BaseBdev1", 00:08:48.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.099 "is_configured": false, 00:08:48.099 "data_offset": 0, 00:08:48.099 "data_size": 0 00:08:48.099 }, 00:08:48.099 { 00:08:48.099 "name": null, 00:08:48.099 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:48.099 "is_configured": false, 00:08:48.099 "data_offset": 0, 00:08:48.099 "data_size": 63488 00:08:48.099 }, 00:08:48.099 { 00:08:48.099 "name": "BaseBdev3", 00:08:48.099 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:48.099 "is_configured": true, 00:08:48.099 "data_offset": 2048, 00:08:48.099 "data_size": 63488 00:08:48.099 } 00:08:48.099 ] 00:08:48.099 }' 00:08:48.099 20:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.099 20:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 20:04:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 [2024-12-08 20:04:20.308095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.360 BaseBdev1 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 
20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.360 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 [ 00:08:48.360 { 00:08:48.360 "name": "BaseBdev1", 00:08:48.360 "aliases": [ 00:08:48.360 "aa5ce0f2-80ed-4bad-b51d-056b5183528f" 00:08:48.360 ], 00:08:48.360 "product_name": "Malloc disk", 00:08:48.360 "block_size": 512, 00:08:48.360 "num_blocks": 65536, 00:08:48.360 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:48.360 "assigned_rate_limits": { 00:08:48.360 "rw_ios_per_sec": 0, 00:08:48.360 "rw_mbytes_per_sec": 0, 00:08:48.360 "r_mbytes_per_sec": 0, 00:08:48.620 "w_mbytes_per_sec": 0 00:08:48.620 }, 00:08:48.620 "claimed": true, 00:08:48.620 "claim_type": "exclusive_write", 00:08:48.620 "zoned": false, 00:08:48.620 "supported_io_types": { 00:08:48.620 "read": true, 00:08:48.620 "write": true, 00:08:48.620 "unmap": true, 00:08:48.620 "flush": true, 00:08:48.620 "reset": true, 00:08:48.620 "nvme_admin": false, 00:08:48.620 "nvme_io": false, 00:08:48.620 "nvme_io_md": false, 00:08:48.620 "write_zeroes": true, 00:08:48.620 "zcopy": true, 00:08:48.620 "get_zone_info": false, 00:08:48.620 "zone_management": false, 00:08:48.620 "zone_append": false, 00:08:48.620 "compare": false, 00:08:48.620 "compare_and_write": false, 00:08:48.620 "abort": true, 00:08:48.620 "seek_hole": false, 00:08:48.620 "seek_data": false, 00:08:48.620 "copy": true, 00:08:48.620 "nvme_iov_md": false 00:08:48.620 }, 00:08:48.620 "memory_domains": [ 00:08:48.620 { 00:08:48.620 "dma_device_id": "system", 00:08:48.620 "dma_device_type": 1 00:08:48.620 }, 00:08:48.620 { 00:08:48.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:48.620 "dma_device_type": 2 00:08:48.620 } 00:08:48.620 ], 00:08:48.620 "driver_specific": {} 00:08:48.620 } 00:08:48.620 ] 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.620 "name": "Existed_Raid", 00:08:48.620 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:48.620 "strip_size_kb": 64, 00:08:48.620 "state": "configuring", 00:08:48.620 "raid_level": "concat", 00:08:48.620 "superblock": true, 00:08:48.620 "num_base_bdevs": 3, 00:08:48.620 "num_base_bdevs_discovered": 2, 00:08:48.620 "num_base_bdevs_operational": 3, 00:08:48.620 "base_bdevs_list": [ 00:08:48.620 { 00:08:48.620 "name": "BaseBdev1", 00:08:48.620 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:48.620 "is_configured": true, 00:08:48.620 "data_offset": 2048, 00:08:48.620 "data_size": 63488 00:08:48.620 }, 00:08:48.620 { 00:08:48.620 "name": null, 00:08:48.620 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:48.620 "is_configured": false, 00:08:48.620 "data_offset": 0, 00:08:48.620 "data_size": 63488 00:08:48.620 }, 00:08:48.620 { 00:08:48.620 "name": "BaseBdev3", 00:08:48.620 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:48.620 "is_configured": true, 00:08:48.620 "data_offset": 2048, 00:08:48.620 "data_size": 63488 00:08:48.620 } 00:08:48.620 ] 00:08:48.620 }' 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.620 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 [2024-12-08 20:04:20.851287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.879 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.138 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.138 "name": "Existed_Raid", 00:08:49.138 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:49.138 "strip_size_kb": 64, 00:08:49.138 "state": "configuring", 00:08:49.138 "raid_level": "concat", 00:08:49.138 "superblock": true, 00:08:49.138 "num_base_bdevs": 3, 00:08:49.138 "num_base_bdevs_discovered": 1, 00:08:49.138 "num_base_bdevs_operational": 3, 00:08:49.138 "base_bdevs_list": [ 00:08:49.138 { 00:08:49.138 "name": "BaseBdev1", 00:08:49.138 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:49.138 "is_configured": true, 00:08:49.138 "data_offset": 2048, 00:08:49.138 "data_size": 63488 00:08:49.138 }, 00:08:49.139 { 00:08:49.139 "name": null, 00:08:49.139 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:49.139 "is_configured": false, 00:08:49.139 "data_offset": 0, 00:08:49.139 "data_size": 63488 00:08:49.139 }, 00:08:49.139 { 00:08:49.139 "name": null, 00:08:49.139 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:49.139 "is_configured": false, 00:08:49.139 "data_offset": 0, 00:08:49.139 "data_size": 63488 00:08:49.139 } 00:08:49.139 ] 00:08:49.139 }' 00:08:49.139 20:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.139 20:04:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.398 [2024-12-08 20:04:21.294607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.398 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.399 20:04:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.399 "name": "Existed_Raid", 00:08:49.399 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:49.399 "strip_size_kb": 64, 00:08:49.399 "state": "configuring", 00:08:49.399 "raid_level": "concat", 00:08:49.399 "superblock": true, 00:08:49.399 "num_base_bdevs": 3, 00:08:49.399 "num_base_bdevs_discovered": 2, 00:08:49.399 "num_base_bdevs_operational": 3, 00:08:49.399 "base_bdevs_list": [ 00:08:49.399 { 00:08:49.399 "name": "BaseBdev1", 00:08:49.399 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:49.399 "is_configured": true, 00:08:49.399 "data_offset": 2048, 00:08:49.399 "data_size": 63488 00:08:49.399 }, 00:08:49.399 { 00:08:49.399 "name": null, 00:08:49.399 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:49.399 "is_configured": 
false, 00:08:49.399 "data_offset": 0, 00:08:49.399 "data_size": 63488 00:08:49.399 }, 00:08:49.399 { 00:08:49.399 "name": "BaseBdev3", 00:08:49.399 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:49.399 "is_configured": true, 00:08:49.399 "data_offset": 2048, 00:08:49.399 "data_size": 63488 00:08:49.399 } 00:08:49.399 ] 00:08:49.399 }' 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.399 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.967 [2024-12-08 20:04:21.801739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.967 20:04:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.967 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.225 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.225 "name": "Existed_Raid", 00:08:50.225 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:50.225 "strip_size_kb": 64, 00:08:50.225 "state": "configuring", 00:08:50.225 "raid_level": "concat", 00:08:50.225 "superblock": true, 00:08:50.225 "num_base_bdevs": 3, 00:08:50.225 
"num_base_bdevs_discovered": 1, 00:08:50.225 "num_base_bdevs_operational": 3, 00:08:50.225 "base_bdevs_list": [ 00:08:50.225 { 00:08:50.225 "name": null, 00:08:50.225 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:50.225 "is_configured": false, 00:08:50.225 "data_offset": 0, 00:08:50.225 "data_size": 63488 00:08:50.225 }, 00:08:50.225 { 00:08:50.225 "name": null, 00:08:50.225 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:50.225 "is_configured": false, 00:08:50.225 "data_offset": 0, 00:08:50.225 "data_size": 63488 00:08:50.225 }, 00:08:50.225 { 00:08:50.225 "name": "BaseBdev3", 00:08:50.225 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:50.225 "is_configured": true, 00:08:50.225 "data_offset": 2048, 00:08:50.225 "data_size": 63488 00:08:50.225 } 00:08:50.225 ] 00:08:50.225 }' 00:08:50.225 20:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.225 20:04:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.484 20:04:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.484 [2024-12-08 20:04:22.368197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.484 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.485 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.485 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.485 
20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.485 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.485 "name": "Existed_Raid", 00:08:50.485 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:50.485 "strip_size_kb": 64, 00:08:50.485 "state": "configuring", 00:08:50.485 "raid_level": "concat", 00:08:50.485 "superblock": true, 00:08:50.485 "num_base_bdevs": 3, 00:08:50.485 "num_base_bdevs_discovered": 2, 00:08:50.485 "num_base_bdevs_operational": 3, 00:08:50.485 "base_bdevs_list": [ 00:08:50.485 { 00:08:50.485 "name": null, 00:08:50.485 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:50.485 "is_configured": false, 00:08:50.485 "data_offset": 0, 00:08:50.485 "data_size": 63488 00:08:50.485 }, 00:08:50.485 { 00:08:50.485 "name": "BaseBdev2", 00:08:50.485 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:50.485 "is_configured": true, 00:08:50.485 "data_offset": 2048, 00:08:50.485 "data_size": 63488 00:08:50.485 }, 00:08:50.485 { 00:08:50.485 "name": "BaseBdev3", 00:08:50.485 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:50.485 "is_configured": true, 00:08:50.485 "data_offset": 2048, 00:08:50.485 "data_size": 63488 00:08:50.485 } 00:08:50.485 ] 00:08:50.485 }' 00:08:50.485 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.485 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aa5ce0f2-80ed-4bad-b51d-056b5183528f 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 [2024-12-08 20:04:22.919878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:51.078 [2024-12-08 20:04:22.920149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:51.078 [2024-12-08 20:04:22.920168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.078 [2024-12-08 20:04:22.920464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.078 [2024-12-08 20:04:22.920627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:51.078 [2024-12-08 20:04:22.920638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:08:51.078 NewBaseBdev 00:08:51.078 [2024-12-08 20:04:22.920779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 [ 00:08:51.078 { 00:08:51.078 "name": "NewBaseBdev", 00:08:51.078 "aliases": [ 00:08:51.078 "aa5ce0f2-80ed-4bad-b51d-056b5183528f" 00:08:51.078 ], 00:08:51.078 "product_name": "Malloc disk", 00:08:51.078 "block_size": 512, 
00:08:51.078 "num_blocks": 65536, 00:08:51.078 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:51.078 "assigned_rate_limits": { 00:08:51.078 "rw_ios_per_sec": 0, 00:08:51.078 "rw_mbytes_per_sec": 0, 00:08:51.078 "r_mbytes_per_sec": 0, 00:08:51.078 "w_mbytes_per_sec": 0 00:08:51.078 }, 00:08:51.078 "claimed": true, 00:08:51.078 "claim_type": "exclusive_write", 00:08:51.078 "zoned": false, 00:08:51.078 "supported_io_types": { 00:08:51.078 "read": true, 00:08:51.078 "write": true, 00:08:51.078 "unmap": true, 00:08:51.078 "flush": true, 00:08:51.078 "reset": true, 00:08:51.078 "nvme_admin": false, 00:08:51.078 "nvme_io": false, 00:08:51.078 "nvme_io_md": false, 00:08:51.078 "write_zeroes": true, 00:08:51.078 "zcopy": true, 00:08:51.078 "get_zone_info": false, 00:08:51.078 "zone_management": false, 00:08:51.078 "zone_append": false, 00:08:51.078 "compare": false, 00:08:51.078 "compare_and_write": false, 00:08:51.078 "abort": true, 00:08:51.078 "seek_hole": false, 00:08:51.078 "seek_data": false, 00:08:51.078 "copy": true, 00:08:51.078 "nvme_iov_md": false 00:08:51.078 }, 00:08:51.078 "memory_domains": [ 00:08:51.078 { 00:08:51.078 "dma_device_id": "system", 00:08:51.078 "dma_device_type": 1 00:08:51.078 }, 00:08:51.078 { 00:08:51.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.078 "dma_device_type": 2 00:08:51.078 } 00:08:51.078 ], 00:08:51.078 "driver_specific": {} 00:08:51.078 } 00:08:51.078 ] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 20:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.078 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.078 "name": "Existed_Raid", 00:08:51.078 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:51.078 "strip_size_kb": 64, 00:08:51.078 "state": "online", 00:08:51.078 "raid_level": "concat", 00:08:51.078 "superblock": true, 00:08:51.078 "num_base_bdevs": 3, 00:08:51.078 "num_base_bdevs_discovered": 3, 00:08:51.078 "num_base_bdevs_operational": 3, 00:08:51.078 "base_bdevs_list": [ 00:08:51.078 { 00:08:51.078 "name": "NewBaseBdev", 00:08:51.078 "uuid": 
"aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:51.078 "is_configured": true, 00:08:51.078 "data_offset": 2048, 00:08:51.078 "data_size": 63488 00:08:51.078 }, 00:08:51.078 { 00:08:51.078 "name": "BaseBdev2", 00:08:51.078 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:51.078 "is_configured": true, 00:08:51.078 "data_offset": 2048, 00:08:51.078 "data_size": 63488 00:08:51.078 }, 00:08:51.078 { 00:08:51.078 "name": "BaseBdev3", 00:08:51.078 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:51.078 "is_configured": true, 00:08:51.078 "data_offset": 2048, 00:08:51.078 "data_size": 63488 00:08:51.078 } 00:08:51.078 ] 00:08:51.078 }' 00:08:51.078 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.078 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.656 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.656 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:51.657 [2024-12-08 20:04:23.415557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.657 "name": "Existed_Raid", 00:08:51.657 "aliases": [ 00:08:51.657 "de9ed8cd-8f2c-484e-b3b9-316184eb225b" 00:08:51.657 ], 00:08:51.657 "product_name": "Raid Volume", 00:08:51.657 "block_size": 512, 00:08:51.657 "num_blocks": 190464, 00:08:51.657 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:51.657 "assigned_rate_limits": { 00:08:51.657 "rw_ios_per_sec": 0, 00:08:51.657 "rw_mbytes_per_sec": 0, 00:08:51.657 "r_mbytes_per_sec": 0, 00:08:51.657 "w_mbytes_per_sec": 0 00:08:51.657 }, 00:08:51.657 "claimed": false, 00:08:51.657 "zoned": false, 00:08:51.657 "supported_io_types": { 00:08:51.657 "read": true, 00:08:51.657 "write": true, 00:08:51.657 "unmap": true, 00:08:51.657 "flush": true, 00:08:51.657 "reset": true, 00:08:51.657 "nvme_admin": false, 00:08:51.657 "nvme_io": false, 00:08:51.657 "nvme_io_md": false, 00:08:51.657 "write_zeroes": true, 00:08:51.657 "zcopy": false, 00:08:51.657 "get_zone_info": false, 00:08:51.657 "zone_management": false, 00:08:51.657 "zone_append": false, 00:08:51.657 "compare": false, 00:08:51.657 "compare_and_write": false, 00:08:51.657 "abort": false, 00:08:51.657 "seek_hole": false, 00:08:51.657 "seek_data": false, 00:08:51.657 "copy": false, 00:08:51.657 "nvme_iov_md": false 00:08:51.657 }, 00:08:51.657 "memory_domains": [ 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 } 00:08:51.657 ], 00:08:51.657 "driver_specific": { 00:08:51.657 "raid": { 00:08:51.657 "uuid": "de9ed8cd-8f2c-484e-b3b9-316184eb225b", 00:08:51.657 "strip_size_kb": 64, 00:08:51.657 "state": "online", 00:08:51.657 "raid_level": "concat", 00:08:51.657 "superblock": true, 00:08:51.657 "num_base_bdevs": 3, 00:08:51.657 "num_base_bdevs_discovered": 3, 00:08:51.657 "num_base_bdevs_operational": 3, 00:08:51.657 "base_bdevs_list": [ 00:08:51.657 { 00:08:51.657 "name": "NewBaseBdev", 00:08:51.657 "uuid": "aa5ce0f2-80ed-4bad-b51d-056b5183528f", 00:08:51.657 "is_configured": true, 00:08:51.657 "data_offset": 2048, 00:08:51.657 "data_size": 63488 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "name": "BaseBdev2", 00:08:51.657 "uuid": "68b8d330-1949-4d03-85ef-8234711cdd7d", 00:08:51.657 "is_configured": true, 00:08:51.657 "data_offset": 2048, 00:08:51.657 "data_size": 63488 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "name": "BaseBdev3", 00:08:51.657 "uuid": "16a08918-01c3-4059-92a5-db813cb76c71", 00:08:51.657 "is_configured": true, 00:08:51.657 "data_offset": 2048, 00:08:51.657 "data_size": 63488 00:08:51.657 } 00:08:51.657 ] 00:08:51.657 } 00:08:51.657 } 00:08:51.657 }' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.657 BaseBdev2 00:08:51.657 BaseBdev3' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.657 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.939 [2024-12-08 20:04:23.674720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.939 [2024-12-08 20:04:23.674793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.939 [2024-12-08 20:04:23.674913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.939 [2024-12-08 20:04:23.675033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.939 [2024-12-08 20:04:23.675100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66062 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66062 ']' 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66062 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66062 00:08:51.939 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.940 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.940 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66062' 00:08:51.940 killing process with pid 66062 00:08:51.940 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66062 00:08:51.940 [2024-12-08 20:04:23.723969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.940 20:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66062 00:08:52.214 [2024-12-08 20:04:24.023864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.151 20:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.151 ************************************ 00:08:53.151 END TEST raid_state_function_test_sb 00:08:53.151 ************************************ 00:08:53.151 00:08:53.151 real 0m10.638s 
00:08:53.151 user 0m16.921s 00:08:53.151 sys 0m1.881s 00:08:53.151 20:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.151 20:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.410 20:04:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:53.410 20:04:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:53.410 20:04:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.410 20:04:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.410 ************************************ 00:08:53.410 START TEST raid_superblock_test 00:08:53.410 ************************************ 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:53.410 20:04:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66682 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66682 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66682 ']' 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.410 20:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.410 [2024-12-08 20:04:25.274359] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:53.410 [2024-12-08 20:04:25.274504] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66682 ] 00:08:53.669 [2024-12-08 20:04:25.444906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.669 [2024-12-08 20:04:25.555986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.928 [2024-12-08 20:04:25.755483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.928 [2024-12-08 20:04:25.755546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:54.188 
20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.188 malloc1 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.188 [2024-12-08 20:04:26.143442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.188 [2024-12-08 20:04:26.143545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.188 [2024-12-08 20:04:26.143584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:54.188 [2024-12-08 20:04:26.143650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.188 [2024-12-08 20:04:26.145808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.188 [2024-12-08 20:04:26.145880] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.188 pt1 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.188 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 malloc2 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 [2024-12-08 20:04:26.203302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.448 [2024-12-08 20:04:26.203354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.448 [2024-12-08 20:04:26.203396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:54.448 [2024-12-08 20:04:26.203405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.448 [2024-12-08 20:04:26.205537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.448 [2024-12-08 20:04:26.205571] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.448 
pt2 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 malloc3 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 [2024-12-08 20:04:26.270293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:54.448 [2024-12-08 20:04:26.270399] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.448 [2024-12-08 20:04:26.270438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:54.448 [2024-12-08 20:04:26.270466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.448 [2024-12-08 20:04:26.272655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.448 [2024-12-08 20:04:26.272726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:54.448 pt3 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.448 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 [2024-12-08 20:04:26.282328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.448 [2024-12-08 20:04:26.284216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.448 [2024-12-08 20:04:26.284330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:54.448 [2024-12-08 20:04:26.284517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:54.448 [2024-12-08 20:04:26.284589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.448 [2024-12-08 20:04:26.284859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:54.448 [2024-12-08 20:04:26.285072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:54.449 [2024-12-08 20:04:26.285116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:54.449 [2024-12-08 20:04:26.285346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.449 20:04:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.449 "name": "raid_bdev1", 00:08:54.449 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:54.449 "strip_size_kb": 64, 00:08:54.449 "state": "online", 00:08:54.449 "raid_level": "concat", 00:08:54.449 "superblock": true, 00:08:54.449 "num_base_bdevs": 3, 00:08:54.449 "num_base_bdevs_discovered": 3, 00:08:54.449 "num_base_bdevs_operational": 3, 00:08:54.449 "base_bdevs_list": [ 00:08:54.449 { 00:08:54.449 "name": "pt1", 00:08:54.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.449 "is_configured": true, 00:08:54.449 "data_offset": 2048, 00:08:54.449 "data_size": 63488 00:08:54.449 }, 00:08:54.449 { 00:08:54.449 "name": "pt2", 00:08:54.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.449 "is_configured": true, 00:08:54.449 "data_offset": 2048, 00:08:54.449 "data_size": 63488 00:08:54.449 }, 00:08:54.449 { 00:08:54.449 "name": "pt3", 00:08:54.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.449 "is_configured": true, 00:08:54.449 "data_offset": 2048, 00:08:54.449 "data_size": 63488 00:08:54.449 } 00:08:54.449 ] 00:08:54.449 }' 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.449 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.019 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:55.019 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:55.019 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.019 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:55.019 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.020 [2024-12-08 20:04:26.709978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.020 "name": "raid_bdev1", 00:08:55.020 "aliases": [ 00:08:55.020 "015a503e-a36f-4e8c-a0b5-9e7f356441e5" 00:08:55.020 ], 00:08:55.020 "product_name": "Raid Volume", 00:08:55.020 "block_size": 512, 00:08:55.020 "num_blocks": 190464, 00:08:55.020 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:55.020 "assigned_rate_limits": { 00:08:55.020 "rw_ios_per_sec": 0, 00:08:55.020 "rw_mbytes_per_sec": 0, 00:08:55.020 "r_mbytes_per_sec": 0, 00:08:55.020 "w_mbytes_per_sec": 0 00:08:55.020 }, 00:08:55.020 "claimed": false, 00:08:55.020 "zoned": false, 00:08:55.020 "supported_io_types": { 00:08:55.020 "read": true, 00:08:55.020 "write": true, 00:08:55.020 "unmap": true, 00:08:55.020 "flush": true, 00:08:55.020 "reset": true, 00:08:55.020 "nvme_admin": false, 00:08:55.020 "nvme_io": false, 00:08:55.020 "nvme_io_md": false, 00:08:55.020 "write_zeroes": true, 00:08:55.020 "zcopy": false, 00:08:55.020 "get_zone_info": false, 00:08:55.020 "zone_management": false, 00:08:55.020 "zone_append": false, 00:08:55.020 "compare": 
false, 00:08:55.020 "compare_and_write": false, 00:08:55.020 "abort": false, 00:08:55.020 "seek_hole": false, 00:08:55.020 "seek_data": false, 00:08:55.020 "copy": false, 00:08:55.020 "nvme_iov_md": false 00:08:55.020 }, 00:08:55.020 "memory_domains": [ 00:08:55.020 { 00:08:55.020 "dma_device_id": "system", 00:08:55.020 "dma_device_type": 1 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.020 "dma_device_type": 2 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "dma_device_id": "system", 00:08:55.020 "dma_device_type": 1 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.020 "dma_device_type": 2 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "dma_device_id": "system", 00:08:55.020 "dma_device_type": 1 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.020 "dma_device_type": 2 00:08:55.020 } 00:08:55.020 ], 00:08:55.020 "driver_specific": { 00:08:55.020 "raid": { 00:08:55.020 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:55.020 "strip_size_kb": 64, 00:08:55.020 "state": "online", 00:08:55.020 "raid_level": "concat", 00:08:55.020 "superblock": true, 00:08:55.020 "num_base_bdevs": 3, 00:08:55.020 "num_base_bdevs_discovered": 3, 00:08:55.020 "num_base_bdevs_operational": 3, 00:08:55.020 "base_bdevs_list": [ 00:08:55.020 { 00:08:55.020 "name": "pt1", 00:08:55.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.020 "is_configured": true, 00:08:55.020 "data_offset": 2048, 00:08:55.020 "data_size": 63488 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "name": "pt2", 00:08:55.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.020 "is_configured": true, 00:08:55.020 "data_offset": 2048, 00:08:55.020 "data_size": 63488 00:08:55.020 }, 00:08:55.020 { 00:08:55.020 "name": "pt3", 00:08:55.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.020 "is_configured": true, 00:08:55.020 "data_offset": 2048, 00:08:55.020 
"data_size": 63488 00:08:55.020 } 00:08:55.020 ] 00:08:55.020 } 00:08:55.020 } 00:08:55.020 }' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:55.020 pt2 00:08:55.020 pt3' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.020 20:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.281 20:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 [2024-12-08 20:04:27.009364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=015a503e-a36f-4e8c-a0b5-9e7f356441e5 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 015a503e-a36f-4e8c-a0b5-9e7f356441e5 ']' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 [2024-12-08 20:04:27.057018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.281 [2024-12-08 20:04:27.057046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.281 [2024-12-08 20:04:27.057119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.281 [2024-12-08 20:04:27.057180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.281 [2024-12-08 20:04:27.057189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.281 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.282 [2024-12-08 20:04:27.196816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:55.282 [2024-12-08 20:04:27.198666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:55.282 
[2024-12-08 20:04:27.198712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:55.282 [2024-12-08 20:04:27.198762] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:55.282 [2024-12-08 20:04:27.198812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:55.282 [2024-12-08 20:04:27.198830] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:55.282 [2024-12-08 20:04:27.198846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.282 [2024-12-08 20:04:27.198855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:55.282 request: 00:08:55.282 { 00:08:55.282 "name": "raid_bdev1", 00:08:55.282 "raid_level": "concat", 00:08:55.282 "base_bdevs": [ 00:08:55.282 "malloc1", 00:08:55.282 "malloc2", 00:08:55.282 "malloc3" 00:08:55.282 ], 00:08:55.282 "strip_size_kb": 64, 00:08:55.282 "superblock": false, 00:08:55.282 "method": "bdev_raid_create", 00:08:55.282 "req_id": 1 00:08:55.282 } 00:08:55.282 Got JSON-RPC error response 00:08:55.282 response: 00:08:55.282 { 00:08:55.282 "code": -17, 00:08:55.282 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:55.282 } 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:55.282 20:04:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:55.282 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 [2024-12-08 20:04:27.264674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.542 [2024-12-08 20:04:27.264739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.542 [2024-12-08 20:04:27.264760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:55.542 [2024-12-08 20:04:27.264769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.542 [2024-12-08 20:04:27.267233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.542 [2024-12-08 20:04:27.267280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.542 [2024-12-08 20:04:27.267388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.542 [2024-12-08 20:04:27.267452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:08:55.542 pt1 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.542 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.542 "name": "raid_bdev1", 00:08:55.542 "uuid": 
"015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:55.542 "strip_size_kb": 64, 00:08:55.542 "state": "configuring", 00:08:55.542 "raid_level": "concat", 00:08:55.542 "superblock": true, 00:08:55.542 "num_base_bdevs": 3, 00:08:55.542 "num_base_bdevs_discovered": 1, 00:08:55.542 "num_base_bdevs_operational": 3, 00:08:55.542 "base_bdevs_list": [ 00:08:55.542 { 00:08:55.542 "name": "pt1", 00:08:55.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.542 "is_configured": true, 00:08:55.542 "data_offset": 2048, 00:08:55.542 "data_size": 63488 00:08:55.542 }, 00:08:55.542 { 00:08:55.542 "name": null, 00:08:55.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.542 "is_configured": false, 00:08:55.542 "data_offset": 2048, 00:08:55.542 "data_size": 63488 00:08:55.542 }, 00:08:55.542 { 00:08:55.542 "name": null, 00:08:55.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.542 "is_configured": false, 00:08:55.543 "data_offset": 2048, 00:08:55.543 "data_size": 63488 00:08:55.543 } 00:08:55.543 ] 00:08:55.543 }' 00:08:55.543 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.543 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 [2024-12-08 20:04:27.703983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.803 [2024-12-08 20:04:27.704095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.803 [2024-12-08 20:04:27.704142] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:55.803 [2024-12-08 20:04:27.704175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.803 [2024-12-08 20:04:27.704681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.803 [2024-12-08 20:04:27.704746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.803 [2024-12-08 20:04:27.704896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.803 [2024-12-08 20:04:27.704979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.803 pt2 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 [2024-12-08 20:04:27.715938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.803 "name": "raid_bdev1", 00:08:55.803 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:55.803 "strip_size_kb": 64, 00:08:55.803 "state": "configuring", 00:08:55.803 "raid_level": "concat", 00:08:55.803 "superblock": true, 00:08:55.803 "num_base_bdevs": 3, 00:08:55.803 "num_base_bdevs_discovered": 1, 00:08:55.803 "num_base_bdevs_operational": 3, 00:08:55.803 "base_bdevs_list": [ 00:08:55.803 { 00:08:55.803 "name": "pt1", 00:08:55.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.803 "is_configured": true, 00:08:55.803 "data_offset": 2048, 00:08:55.803 "data_size": 63488 00:08:55.803 }, 00:08:55.803 { 00:08:55.803 "name": null, 00:08:55.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.803 "is_configured": false, 00:08:55.803 "data_offset": 0, 00:08:55.803 "data_size": 63488 00:08:55.803 }, 00:08:55.803 { 00:08:55.803 "name": null, 00:08:55.803 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:55.803 "is_configured": false, 00:08:55.803 "data_offset": 2048, 00:08:55.803 "data_size": 63488 00:08:55.803 } 00:08:55.803 ] 00:08:55.803 }' 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.803 20:04:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 [2024-12-08 20:04:28.159297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.372 [2024-12-08 20:04:28.159370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.372 [2024-12-08 20:04:28.159389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:56.372 [2024-12-08 20:04:28.159400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.372 [2024-12-08 20:04:28.159882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.372 [2024-12-08 20:04:28.159917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.372 [2024-12-08 20:04:28.160019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:56.372 [2024-12-08 20:04:28.160049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.372 pt2 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 [2024-12-08 20:04:28.171265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:56.372 [2024-12-08 20:04:28.171320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.372 [2024-12-08 20:04:28.171335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:56.372 [2024-12-08 20:04:28.171345] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.372 [2024-12-08 20:04:28.171709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.372 [2024-12-08 20:04:28.171738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:56.372 [2024-12-08 20:04:28.171798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:56.372 [2024-12-08 20:04:28.171818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:56.372 [2024-12-08 20:04:28.171988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.372 [2024-12-08 20:04:28.172006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.372 [2024-12-08 20:04:28.172257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:56.372 [2024-12-08 
20:04:28.172412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.372 [2024-12-08 20:04:28.172420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:56.372 [2024-12-08 20:04:28.172574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.372 pt3 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.372 "name": "raid_bdev1", 00:08:56.372 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:56.372 "strip_size_kb": 64, 00:08:56.372 "state": "online", 00:08:56.372 "raid_level": "concat", 00:08:56.372 "superblock": true, 00:08:56.372 "num_base_bdevs": 3, 00:08:56.372 "num_base_bdevs_discovered": 3, 00:08:56.372 "num_base_bdevs_operational": 3, 00:08:56.372 "base_bdevs_list": [ 00:08:56.372 { 00:08:56.372 "name": "pt1", 00:08:56.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.372 "is_configured": true, 00:08:56.372 "data_offset": 2048, 00:08:56.372 "data_size": 63488 00:08:56.372 }, 00:08:56.372 { 00:08:56.372 "name": "pt2", 00:08:56.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.372 "is_configured": true, 00:08:56.372 "data_offset": 2048, 00:08:56.372 "data_size": 63488 00:08:56.372 }, 00:08:56.372 { 00:08:56.372 "name": "pt3", 00:08:56.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.372 "is_configured": true, 00:08:56.372 "data_offset": 2048, 00:08:56.372 "data_size": 63488 00:08:56.372 } 00:08:56.372 ] 00:08:56.372 }' 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.372 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:56.632 20:04:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.632 [2024-12-08 20:04:28.578856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.632 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.893 "name": "raid_bdev1", 00:08:56.893 "aliases": [ 00:08:56.893 "015a503e-a36f-4e8c-a0b5-9e7f356441e5" 00:08:56.893 ], 00:08:56.893 "product_name": "Raid Volume", 00:08:56.893 "block_size": 512, 00:08:56.893 "num_blocks": 190464, 00:08:56.893 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:56.893 "assigned_rate_limits": { 00:08:56.893 "rw_ios_per_sec": 0, 00:08:56.893 "rw_mbytes_per_sec": 0, 00:08:56.893 "r_mbytes_per_sec": 0, 00:08:56.893 "w_mbytes_per_sec": 0 00:08:56.893 }, 00:08:56.893 "claimed": false, 00:08:56.893 "zoned": false, 00:08:56.893 "supported_io_types": { 00:08:56.893 "read": true, 00:08:56.893 "write": true, 00:08:56.893 "unmap": true, 00:08:56.893 "flush": true, 00:08:56.893 "reset": true, 00:08:56.893 "nvme_admin": false, 00:08:56.893 "nvme_io": false, 00:08:56.893 "nvme_io_md": false, 00:08:56.893 
"write_zeroes": true, 00:08:56.893 "zcopy": false, 00:08:56.893 "get_zone_info": false, 00:08:56.893 "zone_management": false, 00:08:56.893 "zone_append": false, 00:08:56.893 "compare": false, 00:08:56.893 "compare_and_write": false, 00:08:56.893 "abort": false, 00:08:56.893 "seek_hole": false, 00:08:56.893 "seek_data": false, 00:08:56.893 "copy": false, 00:08:56.893 "nvme_iov_md": false 00:08:56.893 }, 00:08:56.893 "memory_domains": [ 00:08:56.893 { 00:08:56.893 "dma_device_id": "system", 00:08:56.893 "dma_device_type": 1 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.893 "dma_device_type": 2 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "dma_device_id": "system", 00:08:56.893 "dma_device_type": 1 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.893 "dma_device_type": 2 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "dma_device_id": "system", 00:08:56.893 "dma_device_type": 1 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.893 "dma_device_type": 2 00:08:56.893 } 00:08:56.893 ], 00:08:56.893 "driver_specific": { 00:08:56.893 "raid": { 00:08:56.893 "uuid": "015a503e-a36f-4e8c-a0b5-9e7f356441e5", 00:08:56.893 "strip_size_kb": 64, 00:08:56.893 "state": "online", 00:08:56.893 "raid_level": "concat", 00:08:56.893 "superblock": true, 00:08:56.893 "num_base_bdevs": 3, 00:08:56.893 "num_base_bdevs_discovered": 3, 00:08:56.893 "num_base_bdevs_operational": 3, 00:08:56.893 "base_bdevs_list": [ 00:08:56.893 { 00:08:56.893 "name": "pt1", 00:08:56.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.893 "is_configured": true, 00:08:56.893 "data_offset": 2048, 00:08:56.893 "data_size": 63488 00:08:56.893 }, 00:08:56.893 { 00:08:56.893 "name": "pt2", 00:08:56.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.893 "is_configured": true, 00:08:56.893 "data_offset": 2048, 00:08:56.893 "data_size": 63488 00:08:56.893 }, 00:08:56.893 
{ 00:08:56.893 "name": "pt3", 00:08:56.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.893 "is_configured": true, 00:08:56.893 "data_offset": 2048, 00:08:56.893 "data_size": 63488 00:08:56.893 } 00:08:56.893 ] 00:08:56.893 } 00:08:56.893 } 00:08:56.893 }' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:56.893 pt2 00:08:56.893 pt3' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:56.893 20:04:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:56.893 
[2024-12-08 20:04:28.850335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.893 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 015a503e-a36f-4e8c-a0b5-9e7f356441e5 '!=' 015a503e-a36f-4e8c-a0b5-9e7f356441e5 ']' 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66682 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66682 ']' 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66682 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66682 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66682' 00:08:57.154 killing process with pid 66682 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66682 00:08:57.154 [2024-12-08 20:04:28.916269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.154 [2024-12-08 20:04:28.916425] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.154 20:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66682 00:08:57.154 [2024-12-08 20:04:28.916559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.154 [2024-12-08 20:04:28.916614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:57.414 [2024-12-08 20:04:29.215485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.353 20:04:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:58.353 00:08:58.353 real 0m5.139s 00:08:58.353 user 0m7.342s 00:08:58.353 sys 0m0.832s 00:08:58.354 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.354 ************************************ 00:08:58.354 END TEST raid_superblock_test 00:08:58.354 20:04:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.354 ************************************ 00:08:58.613 20:04:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:58.613 20:04:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.613 20:04:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.613 20:04:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.613 ************************************ 00:08:58.613 START TEST raid_read_error_test 00:08:58.613 ************************************ 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:58.613 20:04:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tXWoyNpQTF 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66930 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66930 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66930 ']' 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.613 20:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.613 [2024-12-08 20:04:30.519315] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:08:58.613 [2024-12-08 20:04:30.519451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66930 ] 00:08:58.874 [2024-12-08 20:04:30.716280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.874 [2024-12-08 20:04:30.832357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.134 [2024-12-08 20:04:31.026990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.134 [2024-12-08 20:04:31.027036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.394 BaseBdev1_malloc 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.394 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 true 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 [2024-12-08 20:04:31.382815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.655 [2024-12-08 20:04:31.382890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.655 [2024-12-08 20:04:31.382913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.655 [2024-12-08 20:04:31.382924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.655 [2024-12-08 20:04:31.385168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.655 [2024-12-08 20:04:31.385204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.655 BaseBdev1 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 BaseBdev2_malloc 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 true 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 [2024-12-08 20:04:31.448401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.655 [2024-12-08 20:04:31.448471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.655 [2024-12-08 20:04:31.448489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.655 [2024-12-08 20:04:31.448499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.655 [2024-12-08 20:04:31.450542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.655 [2024-12-08 20:04:31.450581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.655 BaseBdev2 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 BaseBdev3_malloc 00:08:59.655 20:04:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 true 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 [2024-12-08 20:04:31.528345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:59.655 [2024-12-08 20:04:31.528394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.655 [2024-12-08 20:04:31.528410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:59.655 [2024-12-08 20:04:31.528420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.655 [2024-12-08 20:04:31.530623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.655 [2024-12-08 20:04:31.530662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:59.655 BaseBdev3 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 [2024-12-08 20:04:31.540412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.655 [2024-12-08 20:04:31.542249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.655 [2024-12-08 20:04:31.542328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.655 [2024-12-08 20:04:31.542550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:59.655 [2024-12-08 20:04:31.542571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.655 [2024-12-08 20:04:31.542842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:59.655 [2024-12-08 20:04:31.543008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:59.655 [2024-12-08 20:04:31.543027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:59.655 [2024-12-08 20:04:31.543217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.655 20:04:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.655 "name": "raid_bdev1", 00:08:59.655 "uuid": "8abbfd19-ea08-401c-90d8-9238168835fb", 00:08:59.655 "strip_size_kb": 64, 00:08:59.655 "state": "online", 00:08:59.655 "raid_level": "concat", 00:08:59.655 "superblock": true, 00:08:59.655 "num_base_bdevs": 3, 00:08:59.655 "num_base_bdevs_discovered": 3, 00:08:59.655 "num_base_bdevs_operational": 3, 00:08:59.655 "base_bdevs_list": [ 00:08:59.655 { 00:08:59.655 "name": "BaseBdev1", 00:08:59.655 "uuid": "daf7ce3d-9a2f-5b49-9843-b502c38a4613", 00:08:59.655 "is_configured": true, 00:08:59.655 "data_offset": 2048, 00:08:59.655 "data_size": 63488 00:08:59.655 }, 00:08:59.655 { 00:08:59.655 "name": "BaseBdev2", 00:08:59.655 "uuid": "b0b34d14-b897-5c1e-9204-2d72fd3ee90c", 00:08:59.655 "is_configured": true, 00:08:59.655 "data_offset": 2048, 00:08:59.655 "data_size": 63488 
00:08:59.655 }, 00:08:59.655 { 00:08:59.655 "name": "BaseBdev3", 00:08:59.655 "uuid": "fed2527d-13ef-5076-8ab1-07e06cee9bf1", 00:08:59.655 "is_configured": true, 00:08:59.655 "data_offset": 2048, 00:08:59.655 "data_size": 63488 00:08:59.655 } 00:08:59.655 ] 00:08:59.655 }' 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.655 20:04:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.225 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:00.225 20:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:00.225 [2024-12-08 20:04:32.044865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.165 20:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.165 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.165 "name": "raid_bdev1", 00:09:01.165 "uuid": "8abbfd19-ea08-401c-90d8-9238168835fb", 00:09:01.165 "strip_size_kb": 64, 00:09:01.165 "state": "online", 00:09:01.165 "raid_level": "concat", 00:09:01.165 "superblock": true, 00:09:01.165 "num_base_bdevs": 3, 00:09:01.165 "num_base_bdevs_discovered": 3, 00:09:01.165 "num_base_bdevs_operational": 3, 00:09:01.165 "base_bdevs_list": [ 00:09:01.165 { 00:09:01.165 "name": "BaseBdev1", 00:09:01.165 "uuid": "daf7ce3d-9a2f-5b49-9843-b502c38a4613", 00:09:01.165 "is_configured": true, 00:09:01.165 "data_offset": 2048, 00:09:01.165 "data_size": 63488 
00:09:01.165 }, 00:09:01.165 { 00:09:01.165 "name": "BaseBdev2", 00:09:01.165 "uuid": "b0b34d14-b897-5c1e-9204-2d72fd3ee90c", 00:09:01.165 "is_configured": true, 00:09:01.165 "data_offset": 2048, 00:09:01.165 "data_size": 63488 00:09:01.165 }, 00:09:01.165 { 00:09:01.165 "name": "BaseBdev3", 00:09:01.165 "uuid": "fed2527d-13ef-5076-8ab1-07e06cee9bf1", 00:09:01.165 "is_configured": true, 00:09:01.165 "data_offset": 2048, 00:09:01.165 "data_size": 63488 00:09:01.165 } 00:09:01.165 ] 00:09:01.165 }' 00:09:01.165 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.165 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.425 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.425 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.425 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.425 [2024-12-08 20:04:33.392815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.425 [2024-12-08 20:04:33.392849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.426 [2024-12-08 20:04:33.395612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.426 [2024-12-08 20:04:33.395664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.426 [2024-12-08 20:04:33.395701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.426 [2024-12-08 20:04:33.395713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:01.426 { 00:09:01.426 "results": [ 00:09:01.426 { 00:09:01.426 "job": "raid_bdev1", 00:09:01.426 "core_mask": "0x1", 00:09:01.426 "workload": "randrw", 00:09:01.426 "percentage": 50, 
00:09:01.426 "status": "finished", 00:09:01.426 "queue_depth": 1, 00:09:01.426 "io_size": 131072, 00:09:01.426 "runtime": 1.348874, 00:09:01.426 "iops": 15391.356049564303, 00:09:01.426 "mibps": 1923.919506195538, 00:09:01.426 "io_failed": 1, 00:09:01.426 "io_timeout": 0, 00:09:01.426 "avg_latency_us": 90.09698319359899, 00:09:01.426 "min_latency_us": 25.041048034934498, 00:09:01.426 "max_latency_us": 1423.7624454148472 00:09:01.426 } 00:09:01.426 ], 00:09:01.426 "core_count": 1 00:09:01.426 } 00:09:01.426 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.426 20:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66930 00:09:01.426 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66930 ']' 00:09:01.426 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66930 00:09:01.426 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:01.685 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.685 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66930 00:09:01.686 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.686 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.686 killing process with pid 66930 00:09:01.686 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66930' 00:09:01.686 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66930 00:09:01.686 [2024-12-08 20:04:33.443791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.686 20:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66930 00:09:01.945 [2024-12-08 
20:04:33.667964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tXWoyNpQTF 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:02.886 00:09:02.886 real 0m4.452s 00:09:02.886 user 0m5.225s 00:09:02.886 sys 0m0.584s 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.886 20:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.886 ************************************ 00:09:02.886 END TEST raid_read_error_test 00:09:02.886 ************************************ 00:09:03.145 20:04:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:03.145 20:04:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.145 20:04:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.145 20:04:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.145 ************************************ 00:09:03.145 START TEST raid_write_error_test 00:09:03.145 ************************************ 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:03.145 20:04:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:03.145 20:04:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:03.145 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hZ2CbRoEMY 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67081 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67081 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67081 ']' 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.146 20:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.146 [2024-12-08 20:04:35.009597] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:03.146 [2024-12-08 20:04:35.009707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67081 ] 00:09:03.405 [2024-12-08 20:04:35.185260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.405 [2024-12-08 20:04:35.300142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.665 [2024-12-08 20:04:35.503295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.665 [2024-12-08 20:04:35.503366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.925 BaseBdev1_malloc 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.925 true 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.925 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.925 [2024-12-08 20:04:35.899335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.925 [2024-12-08 20:04:35.899388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.925 [2024-12-08 20:04:35.899411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.925 [2024-12-08 20:04:35.899424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.199 [2024-12-08 20:04:35.901807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.199 [2024-12-08 20:04:35.901849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:04.199 BaseBdev1 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.199 BaseBdev2_malloc 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.199 true 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.199 [2024-12-08 20:04:35.968587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.199 [2024-12-08 20:04:35.968635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.199 [2024-12-08 20:04:35.968652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.199 [2024-12-08 20:04:35.968663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.199 [2024-12-08 20:04:35.970753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.199 [2024-12-08 20:04:35.970787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.199 BaseBdev2 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.199 20:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.199 20:04:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.200 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 20:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 BaseBdev3_malloc 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 true 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 [2024-12-08 20:04:36.049709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.200 [2024-12-08 20:04:36.049773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.200 [2024-12-08 20:04:36.049792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.200 [2024-12-08 20:04:36.049802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.200 [2024-12-08 20:04:36.052079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.200 [2024-12-08 20:04:36.052122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:04.200 BaseBdev3 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 [2024-12-08 20:04:36.061784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.200 [2024-12-08 20:04:36.063832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.200 [2024-12-08 20:04:36.063922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.200 [2024-12-08 20:04:36.064157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.200 [2024-12-08 20:04:36.064180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.200 [2024-12-08 20:04:36.064492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:04.200 [2024-12-08 20:04:36.064690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.200 [2024-12-08 20:04:36.064715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:04.200 [2024-12-08 20:04:36.064900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.200 "name": "raid_bdev1", 00:09:04.200 "uuid": "7f79c313-7bc9-4ba0-9dc8-e2bda24cd8ca", 00:09:04.200 "strip_size_kb": 64, 00:09:04.200 "state": "online", 00:09:04.200 "raid_level": "concat", 00:09:04.200 "superblock": true, 00:09:04.200 "num_base_bdevs": 3, 00:09:04.200 "num_base_bdevs_discovered": 3, 00:09:04.200 "num_base_bdevs_operational": 3, 00:09:04.200 "base_bdevs_list": [ 00:09:04.200 { 00:09:04.200 
"name": "BaseBdev1", 00:09:04.200 "uuid": "da9b4ab9-3cb4-54f6-9ddb-f9d36cdf42f9", 00:09:04.200 "is_configured": true, 00:09:04.200 "data_offset": 2048, 00:09:04.200 "data_size": 63488 00:09:04.200 }, 00:09:04.200 { 00:09:04.200 "name": "BaseBdev2", 00:09:04.200 "uuid": "d4600edf-9ab4-5a14-9977-6ce73fabb662", 00:09:04.200 "is_configured": true, 00:09:04.200 "data_offset": 2048, 00:09:04.200 "data_size": 63488 00:09:04.200 }, 00:09:04.200 { 00:09:04.200 "name": "BaseBdev3", 00:09:04.200 "uuid": "25783c71-ca16-5f36-8b79-6b6ddc8a5f9f", 00:09:04.200 "is_configured": true, 00:09:04.200 "data_offset": 2048, 00:09:04.200 "data_size": 63488 00:09:04.200 } 00:09:04.200 ] 00:09:04.200 }' 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.200 20:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.779 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.779 20:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:04.779 [2024-12-08 20:04:36.634225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.738 "name": "raid_bdev1", 00:09:05.738 "uuid": "7f79c313-7bc9-4ba0-9dc8-e2bda24cd8ca", 00:09:05.738 "strip_size_kb": 64, 00:09:05.738 "state": "online", 
00:09:05.738 "raid_level": "concat", 00:09:05.738 "superblock": true, 00:09:05.738 "num_base_bdevs": 3, 00:09:05.738 "num_base_bdevs_discovered": 3, 00:09:05.738 "num_base_bdevs_operational": 3, 00:09:05.738 "base_bdevs_list": [ 00:09:05.738 { 00:09:05.738 "name": "BaseBdev1", 00:09:05.738 "uuid": "da9b4ab9-3cb4-54f6-9ddb-f9d36cdf42f9", 00:09:05.738 "is_configured": true, 00:09:05.738 "data_offset": 2048, 00:09:05.738 "data_size": 63488 00:09:05.738 }, 00:09:05.738 { 00:09:05.738 "name": "BaseBdev2", 00:09:05.738 "uuid": "d4600edf-9ab4-5a14-9977-6ce73fabb662", 00:09:05.738 "is_configured": true, 00:09:05.738 "data_offset": 2048, 00:09:05.738 "data_size": 63488 00:09:05.738 }, 00:09:05.738 { 00:09:05.738 "name": "BaseBdev3", 00:09:05.738 "uuid": "25783c71-ca16-5f36-8b79-6b6ddc8a5f9f", 00:09:05.738 "is_configured": true, 00:09:05.738 "data_offset": 2048, 00:09:05.738 "data_size": 63488 00:09:05.738 } 00:09:05.738 ] 00:09:05.738 }' 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.738 20:04:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.308 [2024-12-08 20:04:38.034388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.308 [2024-12-08 20:04:38.034425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.308 [2024-12-08 20:04:38.037207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.308 [2024-12-08 20:04:38.037257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.308 [2024-12-08 20:04:38.037294] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.308 [2024-12-08 20:04:38.037307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.308 { 00:09:06.308 "results": [ 00:09:06.308 { 00:09:06.308 "job": "raid_bdev1", 00:09:06.308 "core_mask": "0x1", 00:09:06.308 "workload": "randrw", 00:09:06.308 "percentage": 50, 00:09:06.308 "status": "finished", 00:09:06.308 "queue_depth": 1, 00:09:06.308 "io_size": 131072, 00:09:06.308 "runtime": 1.401183, 00:09:06.308 "iops": 15140.063788955475, 00:09:06.308 "mibps": 1892.5079736194343, 00:09:06.308 "io_failed": 1, 00:09:06.308 "io_timeout": 0, 00:09:06.308 "avg_latency_us": 91.56462855337381, 00:09:06.308 "min_latency_us": 27.165065502183406, 00:09:06.308 "max_latency_us": 1595.4724890829693 00:09:06.308 } 00:09:06.308 ], 00:09:06.308 "core_count": 1 00:09:06.308 } 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67081 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67081 ']' 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67081 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67081 00:09:06.308 killing process with pid 67081 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.308 
20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67081' 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67081 00:09:06.308 [2024-12-08 20:04:38.085558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.308 20:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67081 00:09:06.567 [2024-12-08 20:04:38.313655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.506 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hZ2CbRoEMY 00:09:07.506 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.506 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:07.765 00:09:07.765 real 0m4.586s 00:09:07.765 user 0m5.450s 00:09:07.765 sys 0m0.590s 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.765 20:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.765 ************************************ 00:09:07.765 END TEST raid_write_error_test 00:09:07.765 ************************************ 00:09:07.765 20:04:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:07.765 20:04:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:07.765 20:04:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.765 20:04:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.765 20:04:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.765 ************************************ 00:09:07.765 START TEST raid_state_function_test 00:09:07.765 ************************************ 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67219 00:09:07.765 Process raid pid: 67219 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67219' 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67219 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67219 ']' 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.765 20:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.765 [2024-12-08 20:04:39.655658] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:07.765 [2024-12-08 20:04:39.655773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.023 [2024-12-08 20:04:39.830798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.023 [2024-12-08 20:04:39.942598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.281 [2024-12-08 20:04:40.145886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.281 [2024-12-08 20:04:40.145965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 [2024-12-08 20:04:40.488278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.540 [2024-12-08 20:04:40.488335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.540 [2024-12-08 20:04:40.488345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.540 [2024-12-08 20:04:40.488355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.540 [2024-12-08 20:04:40.488378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.540 [2024-12-08 20:04:40.488387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.540 
20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.799 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.799 "name": "Existed_Raid", 00:09:08.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.799 "strip_size_kb": 0, 00:09:08.799 "state": "configuring", 00:09:08.799 "raid_level": "raid1", 00:09:08.799 "superblock": false, 00:09:08.799 "num_base_bdevs": 3, 00:09:08.799 "num_base_bdevs_discovered": 0, 00:09:08.799 "num_base_bdevs_operational": 3, 00:09:08.799 "base_bdevs_list": [ 00:09:08.799 { 00:09:08.799 "name": "BaseBdev1", 00:09:08.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.799 "is_configured": false, 00:09:08.799 "data_offset": 0, 00:09:08.799 "data_size": 0 00:09:08.799 }, 00:09:08.799 { 00:09:08.799 "name": "BaseBdev2", 00:09:08.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.799 "is_configured": false, 00:09:08.799 "data_offset": 0, 00:09:08.799 "data_size": 0 00:09:08.799 }, 00:09:08.799 { 00:09:08.799 "name": "BaseBdev3", 00:09:08.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.799 "is_configured": false, 00:09:08.799 "data_offset": 0, 00:09:08.799 "data_size": 0 00:09:08.799 } 00:09:08.799 ] 00:09:08.799 }' 00:09:08.799 20:04:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.799 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 [2024-12-08 20:04:40.927634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.058 [2024-12-08 20:04:40.927676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 [2024-12-08 20:04:40.939615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.058 [2024-12-08 20:04:40.939667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.058 [2024-12-08 20:04:40.939676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.058 [2024-12-08 20:04:40.939685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.058 [2024-12-08 20:04:40.939709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.058 [2024-12-08 20:04:40.939717] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 [2024-12-08 20:04:40.985710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.058 BaseBdev1 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.059 20:04:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.059 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.059 20:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 [ 00:09:09.059 { 00:09:09.059 "name": "BaseBdev1", 00:09:09.059 "aliases": [ 00:09:09.059 "5a382a4e-0a08-4c4c-9511-55909af218e9" 00:09:09.059 ], 00:09:09.059 "product_name": "Malloc disk", 00:09:09.059 "block_size": 512, 00:09:09.059 "num_blocks": 65536, 00:09:09.059 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:09.059 "assigned_rate_limits": { 00:09:09.059 "rw_ios_per_sec": 0, 00:09:09.059 "rw_mbytes_per_sec": 0, 00:09:09.059 "r_mbytes_per_sec": 0, 00:09:09.059 "w_mbytes_per_sec": 0 00:09:09.059 }, 00:09:09.059 "claimed": true, 00:09:09.059 "claim_type": "exclusive_write", 00:09:09.059 "zoned": false, 00:09:09.059 "supported_io_types": { 00:09:09.059 "read": true, 00:09:09.059 "write": true, 00:09:09.059 "unmap": true, 00:09:09.059 "flush": true, 00:09:09.059 "reset": true, 00:09:09.059 "nvme_admin": false, 00:09:09.059 "nvme_io": false, 00:09:09.059 "nvme_io_md": false, 00:09:09.059 "write_zeroes": true, 00:09:09.059 "zcopy": true, 00:09:09.059 "get_zone_info": false, 00:09:09.059 "zone_management": false, 00:09:09.059 "zone_append": false, 00:09:09.059 "compare": false, 00:09:09.059 "compare_and_write": false, 00:09:09.059 "abort": true, 00:09:09.059 "seek_hole": false, 00:09:09.059 "seek_data": false, 00:09:09.059 "copy": true, 00:09:09.059 "nvme_iov_md": false 00:09:09.059 }, 00:09:09.059 "memory_domains": [ 00:09:09.059 { 00:09:09.059 "dma_device_id": "system", 00:09:09.059 "dma_device_type": 1 00:09:09.059 }, 00:09:09.059 { 00:09:09.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.059 "dma_device_type": 2 00:09:09.059 } 00:09:09.059 ], 00:09:09.059 "driver_specific": {} 00:09:09.059 } 00:09:09.059 ] 00:09:09.059 20:04:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.059 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.319 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.319 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:09.319 "name": "Existed_Raid", 00:09:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.319 "strip_size_kb": 0, 00:09:09.319 "state": "configuring", 00:09:09.319 "raid_level": "raid1", 00:09:09.319 "superblock": false, 00:09:09.319 "num_base_bdevs": 3, 00:09:09.319 "num_base_bdevs_discovered": 1, 00:09:09.319 "num_base_bdevs_operational": 3, 00:09:09.319 "base_bdevs_list": [ 00:09:09.319 { 00:09:09.319 "name": "BaseBdev1", 00:09:09.319 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:09.319 "is_configured": true, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 65536 00:09:09.319 }, 00:09:09.319 { 00:09:09.319 "name": "BaseBdev2", 00:09:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.319 "is_configured": false, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 0 00:09:09.319 }, 00:09:09.319 { 00:09:09.319 "name": "BaseBdev3", 00:09:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.319 "is_configured": false, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 0 00:09:09.319 } 00:09:09.319 ] 00:09:09.319 }' 00:09:09.319 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.319 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.579 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.580 [2024-12-08 20:04:41.460945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.580 [2024-12-08 20:04:41.461033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.580 [2024-12-08 20:04:41.468980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.580 [2024-12-08 20:04:41.470855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.580 [2024-12-08 20:04:41.470899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.580 [2024-12-08 20:04:41.470909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.580 [2024-12-08 20:04:41.470917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.580 "name": "Existed_Raid", 00:09:09.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.580 "strip_size_kb": 0, 00:09:09.580 "state": "configuring", 00:09:09.580 "raid_level": "raid1", 00:09:09.580 "superblock": false, 00:09:09.580 "num_base_bdevs": 3, 00:09:09.580 "num_base_bdevs_discovered": 1, 00:09:09.580 "num_base_bdevs_operational": 3, 00:09:09.580 "base_bdevs_list": [ 00:09:09.580 { 00:09:09.580 "name": "BaseBdev1", 00:09:09.580 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:09.580 "is_configured": true, 00:09:09.580 "data_offset": 0, 00:09:09.580 "data_size": 65536 00:09:09.580 }, 00:09:09.580 { 00:09:09.580 "name": "BaseBdev2", 00:09:09.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.580 
"is_configured": false, 00:09:09.580 "data_offset": 0, 00:09:09.580 "data_size": 0 00:09:09.580 }, 00:09:09.580 { 00:09:09.580 "name": "BaseBdev3", 00:09:09.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.580 "is_configured": false, 00:09:09.580 "data_offset": 0, 00:09:09.580 "data_size": 0 00:09:09.580 } 00:09:09.580 ] 00:09:09.580 }' 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.580 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.150 [2024-12-08 20:04:41.937334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.150 BaseBdev2 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.150 20:04:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.150 [ 00:09:10.150 { 00:09:10.150 "name": "BaseBdev2", 00:09:10.150 "aliases": [ 00:09:10.150 "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e" 00:09:10.150 ], 00:09:10.150 "product_name": "Malloc disk", 00:09:10.150 "block_size": 512, 00:09:10.150 "num_blocks": 65536, 00:09:10.150 "uuid": "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e", 00:09:10.150 "assigned_rate_limits": { 00:09:10.150 "rw_ios_per_sec": 0, 00:09:10.150 "rw_mbytes_per_sec": 0, 00:09:10.150 "r_mbytes_per_sec": 0, 00:09:10.150 "w_mbytes_per_sec": 0 00:09:10.150 }, 00:09:10.150 "claimed": true, 00:09:10.150 "claim_type": "exclusive_write", 00:09:10.150 "zoned": false, 00:09:10.150 "supported_io_types": { 00:09:10.150 "read": true, 00:09:10.150 "write": true, 00:09:10.150 "unmap": true, 00:09:10.150 "flush": true, 00:09:10.150 "reset": true, 00:09:10.150 "nvme_admin": false, 00:09:10.150 "nvme_io": false, 00:09:10.150 "nvme_io_md": false, 00:09:10.150 "write_zeroes": true, 00:09:10.150 "zcopy": true, 00:09:10.150 "get_zone_info": false, 00:09:10.150 "zone_management": false, 00:09:10.150 "zone_append": false, 00:09:10.150 "compare": false, 00:09:10.150 "compare_and_write": false, 00:09:10.150 "abort": true, 00:09:10.150 "seek_hole": false, 00:09:10.150 "seek_data": false, 00:09:10.150 "copy": true, 00:09:10.150 "nvme_iov_md": false 00:09:10.150 }, 00:09:10.150 
"memory_domains": [ 00:09:10.150 { 00:09:10.150 "dma_device_id": "system", 00:09:10.150 "dma_device_type": 1 00:09:10.150 }, 00:09:10.150 { 00:09:10.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.150 "dma_device_type": 2 00:09:10.150 } 00:09:10.150 ], 00:09:10.150 "driver_specific": {} 00:09:10.150 } 00:09:10.150 ] 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.150 20:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.150 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.150 "name": "Existed_Raid", 00:09:10.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.150 "strip_size_kb": 0, 00:09:10.150 "state": "configuring", 00:09:10.150 "raid_level": "raid1", 00:09:10.150 "superblock": false, 00:09:10.150 "num_base_bdevs": 3, 00:09:10.150 "num_base_bdevs_discovered": 2, 00:09:10.150 "num_base_bdevs_operational": 3, 00:09:10.150 "base_bdevs_list": [ 00:09:10.150 { 00:09:10.150 "name": "BaseBdev1", 00:09:10.150 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:10.150 "is_configured": true, 00:09:10.150 "data_offset": 0, 00:09:10.150 "data_size": 65536 00:09:10.150 }, 00:09:10.150 { 00:09:10.150 "name": "BaseBdev2", 00:09:10.150 "uuid": "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e", 00:09:10.150 "is_configured": true, 00:09:10.150 "data_offset": 0, 00:09:10.150 "data_size": 65536 00:09:10.150 }, 00:09:10.150 { 00:09:10.150 "name": "BaseBdev3", 00:09:10.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.150 "is_configured": false, 00:09:10.150 "data_offset": 0, 00:09:10.150 "data_size": 0 00:09:10.150 } 00:09:10.150 ] 00:09:10.150 }' 00:09:10.150 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.150 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.718 [2024-12-08 20:04:42.447773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.718 [2024-12-08 20:04:42.447830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.718 [2024-12-08 20:04:42.447842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:10.718 [2024-12-08 20:04:42.448149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.718 [2024-12-08 20:04:42.448350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.718 [2024-12-08 20:04:42.448366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.718 [2024-12-08 20:04:42.448685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.718 BaseBdev3 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.718 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.719 [ 00:09:10.719 { 00:09:10.719 "name": "BaseBdev3", 00:09:10.719 "aliases": [ 00:09:10.719 "8ce937af-cac3-4dbc-bd7f-f9be4250a8df" 00:09:10.719 ], 00:09:10.719 "product_name": "Malloc disk", 00:09:10.719 "block_size": 512, 00:09:10.719 "num_blocks": 65536, 00:09:10.719 "uuid": "8ce937af-cac3-4dbc-bd7f-f9be4250a8df", 00:09:10.719 "assigned_rate_limits": { 00:09:10.719 "rw_ios_per_sec": 0, 00:09:10.719 "rw_mbytes_per_sec": 0, 00:09:10.719 "r_mbytes_per_sec": 0, 00:09:10.719 "w_mbytes_per_sec": 0 00:09:10.719 }, 00:09:10.719 "claimed": true, 00:09:10.719 "claim_type": "exclusive_write", 00:09:10.719 "zoned": false, 00:09:10.719 "supported_io_types": { 00:09:10.719 "read": true, 00:09:10.719 "write": true, 00:09:10.719 "unmap": true, 00:09:10.719 "flush": true, 00:09:10.719 "reset": true, 00:09:10.719 "nvme_admin": false, 00:09:10.719 "nvme_io": false, 00:09:10.719 "nvme_io_md": false, 00:09:10.719 "write_zeroes": true, 00:09:10.719 "zcopy": true, 00:09:10.719 "get_zone_info": false, 00:09:10.719 "zone_management": false, 00:09:10.719 "zone_append": false, 00:09:10.719 "compare": false, 00:09:10.719 "compare_and_write": false, 00:09:10.719 "abort": true, 00:09:10.719 "seek_hole": false, 00:09:10.719 "seek_data": false, 00:09:10.719 
"copy": true, 00:09:10.719 "nvme_iov_md": false 00:09:10.719 }, 00:09:10.719 "memory_domains": [ 00:09:10.719 { 00:09:10.719 "dma_device_id": "system", 00:09:10.719 "dma_device_type": 1 00:09:10.719 }, 00:09:10.719 { 00:09:10.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.719 "dma_device_type": 2 00:09:10.719 } 00:09:10.719 ], 00:09:10.719 "driver_specific": {} 00:09:10.719 } 00:09:10.719 ] 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.719 20:04:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.719 "name": "Existed_Raid", 00:09:10.719 "uuid": "5c27f77d-4fe3-4ef1-ad2a-70abd167b772", 00:09:10.719 "strip_size_kb": 0, 00:09:10.719 "state": "online", 00:09:10.719 "raid_level": "raid1", 00:09:10.719 "superblock": false, 00:09:10.719 "num_base_bdevs": 3, 00:09:10.719 "num_base_bdevs_discovered": 3, 00:09:10.719 "num_base_bdevs_operational": 3, 00:09:10.719 "base_bdevs_list": [ 00:09:10.719 { 00:09:10.719 "name": "BaseBdev1", 00:09:10.719 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:10.719 "is_configured": true, 00:09:10.719 "data_offset": 0, 00:09:10.719 "data_size": 65536 00:09:10.719 }, 00:09:10.719 { 00:09:10.719 "name": "BaseBdev2", 00:09:10.719 "uuid": "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e", 00:09:10.719 "is_configured": true, 00:09:10.719 "data_offset": 0, 00:09:10.719 "data_size": 65536 00:09:10.719 }, 00:09:10.719 { 00:09:10.719 "name": "BaseBdev3", 00:09:10.719 "uuid": "8ce937af-cac3-4dbc-bd7f-f9be4250a8df", 00:09:10.719 "is_configured": true, 00:09:10.719 "data_offset": 0, 00:09:10.719 "data_size": 65536 00:09:10.719 } 00:09:10.719 ] 00:09:10.719 }' 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.719 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.979 20:04:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.979 [2024-12-08 20:04:42.931436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.979 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.979 "name": "Existed_Raid", 00:09:10.979 "aliases": [ 00:09:10.979 "5c27f77d-4fe3-4ef1-ad2a-70abd167b772" 00:09:10.979 ], 00:09:10.979 "product_name": "Raid Volume", 00:09:10.979 "block_size": 512, 00:09:10.979 "num_blocks": 65536, 00:09:10.979 "uuid": "5c27f77d-4fe3-4ef1-ad2a-70abd167b772", 00:09:10.979 "assigned_rate_limits": { 00:09:10.979 "rw_ios_per_sec": 0, 00:09:10.979 "rw_mbytes_per_sec": 0, 00:09:10.979 "r_mbytes_per_sec": 0, 00:09:10.979 "w_mbytes_per_sec": 0 00:09:10.979 }, 00:09:10.979 "claimed": false, 00:09:10.979 "zoned": false, 
00:09:10.979 "supported_io_types": { 00:09:10.979 "read": true, 00:09:10.979 "write": true, 00:09:10.979 "unmap": false, 00:09:10.979 "flush": false, 00:09:10.979 "reset": true, 00:09:10.979 "nvme_admin": false, 00:09:10.979 "nvme_io": false, 00:09:10.979 "nvme_io_md": false, 00:09:10.979 "write_zeroes": true, 00:09:10.979 "zcopy": false, 00:09:10.979 "get_zone_info": false, 00:09:10.979 "zone_management": false, 00:09:10.979 "zone_append": false, 00:09:10.979 "compare": false, 00:09:10.979 "compare_and_write": false, 00:09:10.979 "abort": false, 00:09:10.979 "seek_hole": false, 00:09:10.979 "seek_data": false, 00:09:10.979 "copy": false, 00:09:10.979 "nvme_iov_md": false 00:09:10.979 }, 00:09:10.979 "memory_domains": [ 00:09:10.979 { 00:09:10.979 "dma_device_id": "system", 00:09:10.979 "dma_device_type": 1 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.979 "dma_device_type": 2 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "dma_device_id": "system", 00:09:10.979 "dma_device_type": 1 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.979 "dma_device_type": 2 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "dma_device_id": "system", 00:09:10.979 "dma_device_type": 1 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.979 "dma_device_type": 2 00:09:10.979 } 00:09:10.979 ], 00:09:10.979 "driver_specific": { 00:09:10.979 "raid": { 00:09:10.979 "uuid": "5c27f77d-4fe3-4ef1-ad2a-70abd167b772", 00:09:10.979 "strip_size_kb": 0, 00:09:10.979 "state": "online", 00:09:10.979 "raid_level": "raid1", 00:09:10.979 "superblock": false, 00:09:10.979 "num_base_bdevs": 3, 00:09:10.979 "num_base_bdevs_discovered": 3, 00:09:10.979 "num_base_bdevs_operational": 3, 00:09:10.979 "base_bdevs_list": [ 00:09:10.979 { 00:09:10.979 "name": "BaseBdev1", 00:09:10.979 "uuid": "5a382a4e-0a08-4c4c-9511-55909af218e9", 00:09:10.979 "is_configured": true, 00:09:10.979 
"data_offset": 0, 00:09:10.979 "data_size": 65536 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "name": "BaseBdev2", 00:09:10.979 "uuid": "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e", 00:09:10.979 "is_configured": true, 00:09:10.979 "data_offset": 0, 00:09:10.979 "data_size": 65536 00:09:10.979 }, 00:09:10.979 { 00:09:10.979 "name": "BaseBdev3", 00:09:10.979 "uuid": "8ce937af-cac3-4dbc-bd7f-f9be4250a8df", 00:09:10.979 "is_configured": true, 00:09:10.979 "data_offset": 0, 00:09:10.979 "data_size": 65536 00:09:10.979 } 00:09:10.979 ] 00:09:10.979 } 00:09:10.979 } 00:09:10.979 }' 00:09:11.239 20:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.239 BaseBdev2 00:09:11.239 BaseBdev3' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.239 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.239 [2024-12-08 20:04:43.186689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.499 "name": "Existed_Raid", 00:09:11.499 "uuid": "5c27f77d-4fe3-4ef1-ad2a-70abd167b772", 00:09:11.499 "strip_size_kb": 0, 00:09:11.499 "state": "online", 00:09:11.499 "raid_level": "raid1", 00:09:11.499 "superblock": false, 00:09:11.499 "num_base_bdevs": 3, 00:09:11.499 "num_base_bdevs_discovered": 2, 00:09:11.499 "num_base_bdevs_operational": 2, 00:09:11.499 "base_bdevs_list": [ 00:09:11.499 { 00:09:11.499 "name": null, 00:09:11.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.499 "is_configured": false, 00:09:11.499 "data_offset": 0, 00:09:11.499 "data_size": 65536 00:09:11.499 }, 00:09:11.499 { 00:09:11.499 "name": "BaseBdev2", 00:09:11.499 "uuid": "a6aee7f3-f7c6-4fd4-adfe-811c1939a39e", 00:09:11.499 "is_configured": true, 00:09:11.499 "data_offset": 0, 00:09:11.499 "data_size": 65536 00:09:11.499 }, 00:09:11.499 { 00:09:11.499 "name": "BaseBdev3", 00:09:11.499 "uuid": "8ce937af-cac3-4dbc-bd7f-f9be4250a8df", 00:09:11.499 "is_configured": true, 00:09:11.499 "data_offset": 0, 00:09:11.499 "data_size": 65536 00:09:11.499 } 00:09:11.499 ] 
00:09:11.499 }' 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.499 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.758 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.758 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.017 [2024-12-08 20:04:43.785014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.017 20:04:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.017 20:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.017 [2024-12-08 20:04:43.936641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.017 [2024-12-08 20:04:43.936770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.278 [2024-12-08 20:04:44.030840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.278 [2024-12-08 20:04:44.030908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.278 [2024-12-08 20:04:44.030919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.278 20:04:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.278 BaseBdev2 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.278 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.279 
20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 [ 00:09:12.279 { 00:09:12.279 "name": "BaseBdev2", 00:09:12.279 "aliases": [ 00:09:12.279 "53d0cc5b-7128-49fe-aa4d-729b178d7fd9" 00:09:12.279 ], 00:09:12.279 "product_name": "Malloc disk", 00:09:12.279 "block_size": 512, 00:09:12.279 "num_blocks": 65536, 00:09:12.279 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:12.279 "assigned_rate_limits": { 00:09:12.279 "rw_ios_per_sec": 0, 00:09:12.279 "rw_mbytes_per_sec": 0, 00:09:12.279 "r_mbytes_per_sec": 0, 00:09:12.279 "w_mbytes_per_sec": 0 00:09:12.279 }, 00:09:12.279 "claimed": false, 00:09:12.279 "zoned": false, 00:09:12.279 "supported_io_types": { 00:09:12.279 "read": true, 00:09:12.279 "write": true, 00:09:12.279 "unmap": true, 00:09:12.279 "flush": true, 00:09:12.279 "reset": true, 00:09:12.279 "nvme_admin": false, 00:09:12.279 "nvme_io": false, 00:09:12.279 "nvme_io_md": false, 00:09:12.279 "write_zeroes": true, 
00:09:12.279 "zcopy": true, 00:09:12.279 "get_zone_info": false, 00:09:12.279 "zone_management": false, 00:09:12.279 "zone_append": false, 00:09:12.279 "compare": false, 00:09:12.279 "compare_and_write": false, 00:09:12.279 "abort": true, 00:09:12.279 "seek_hole": false, 00:09:12.279 "seek_data": false, 00:09:12.279 "copy": true, 00:09:12.279 "nvme_iov_md": false 00:09:12.279 }, 00:09:12.279 "memory_domains": [ 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 } 00:09:12.279 ], 00:09:12.279 "driver_specific": {} 00:09:12.279 } 00:09:12.279 ] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 BaseBdev3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.279 20:04:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 [ 00:09:12.279 { 00:09:12.279 "name": "BaseBdev3", 00:09:12.279 "aliases": [ 00:09:12.279 "9dd0b4a9-9202-40d3-88c1-e7938131af95" 00:09:12.279 ], 00:09:12.279 "product_name": "Malloc disk", 00:09:12.279 "block_size": 512, 00:09:12.279 "num_blocks": 65536, 00:09:12.279 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:12.279 "assigned_rate_limits": { 00:09:12.279 "rw_ios_per_sec": 0, 00:09:12.279 "rw_mbytes_per_sec": 0, 00:09:12.279 "r_mbytes_per_sec": 0, 00:09:12.279 "w_mbytes_per_sec": 0 00:09:12.279 }, 00:09:12.279 "claimed": false, 00:09:12.279 "zoned": false, 00:09:12.279 "supported_io_types": { 00:09:12.279 "read": true, 00:09:12.279 "write": true, 00:09:12.279 "unmap": true, 00:09:12.279 "flush": true, 00:09:12.279 "reset": true, 00:09:12.279 "nvme_admin": false, 00:09:12.279 "nvme_io": false, 00:09:12.279 "nvme_io_md": false, 00:09:12.279 "write_zeroes": true, 
00:09:12.279 "zcopy": true, 00:09:12.279 "get_zone_info": false, 00:09:12.279 "zone_management": false, 00:09:12.279 "zone_append": false, 00:09:12.279 "compare": false, 00:09:12.279 "compare_and_write": false, 00:09:12.279 "abort": true, 00:09:12.279 "seek_hole": false, 00:09:12.279 "seek_data": false, 00:09:12.279 "copy": true, 00:09:12.279 "nvme_iov_md": false 00:09:12.279 }, 00:09:12.279 "memory_domains": [ 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 } 00:09:12.279 ], 00:09:12.279 "driver_specific": {} 00:09:12.279 } 00:09:12.279 ] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 [2024-12-08 20:04:44.243921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.279 [2024-12-08 20:04:44.243980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.279 [2024-12-08 20:04:44.244000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.279 [2024-12-08 20:04:44.245868] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.279 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:12.539 "name": "Existed_Raid", 00:09:12.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.539 "strip_size_kb": 0, 00:09:12.539 "state": "configuring", 00:09:12.539 "raid_level": "raid1", 00:09:12.539 "superblock": false, 00:09:12.539 "num_base_bdevs": 3, 00:09:12.539 "num_base_bdevs_discovered": 2, 00:09:12.539 "num_base_bdevs_operational": 3, 00:09:12.539 "base_bdevs_list": [ 00:09:12.539 { 00:09:12.539 "name": "BaseBdev1", 00:09:12.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.539 "is_configured": false, 00:09:12.539 "data_offset": 0, 00:09:12.539 "data_size": 0 00:09:12.539 }, 00:09:12.539 { 00:09:12.539 "name": "BaseBdev2", 00:09:12.539 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:12.539 "is_configured": true, 00:09:12.539 "data_offset": 0, 00:09:12.539 "data_size": 65536 00:09:12.539 }, 00:09:12.539 { 00:09:12.539 "name": "BaseBdev3", 00:09:12.539 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:12.539 "is_configured": true, 00:09:12.539 "data_offset": 0, 00:09:12.539 "data_size": 65536 00:09:12.539 } 00:09:12.539 ] 00:09:12.539 }' 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.539 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.798 [2024-12-08 20:04:44.723195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.798 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.057 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.057 "name": "Existed_Raid", 00:09:13.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.058 "strip_size_kb": 0, 00:09:13.058 "state": "configuring", 00:09:13.058 "raid_level": "raid1", 00:09:13.058 "superblock": false, 00:09:13.058 "num_base_bdevs": 3, 
00:09:13.058 "num_base_bdevs_discovered": 1, 00:09:13.058 "num_base_bdevs_operational": 3, 00:09:13.058 "base_bdevs_list": [ 00:09:13.058 { 00:09:13.058 "name": "BaseBdev1", 00:09:13.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.058 "is_configured": false, 00:09:13.058 "data_offset": 0, 00:09:13.058 "data_size": 0 00:09:13.058 }, 00:09:13.058 { 00:09:13.058 "name": null, 00:09:13.058 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:13.058 "is_configured": false, 00:09:13.058 "data_offset": 0, 00:09:13.058 "data_size": 65536 00:09:13.058 }, 00:09:13.058 { 00:09:13.058 "name": "BaseBdev3", 00:09:13.058 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:13.058 "is_configured": true, 00:09:13.058 "data_offset": 0, 00:09:13.058 "data_size": 65536 00:09:13.058 } 00:09:13.058 ] 00:09:13.058 }' 00:09:13.058 20:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.058 20:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.326 20:04:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.326 [2024-12-08 20:04:45.270946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.326 BaseBdev1 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.326 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.326 [ 00:09:13.326 { 00:09:13.326 "name": "BaseBdev1", 00:09:13.326 "aliases": [ 00:09:13.326 "283f4c3b-3991-4d8d-b995-3278cb0196ab" 00:09:13.326 ], 00:09:13.326 "product_name": "Malloc disk", 
00:09:13.326 "block_size": 512, 00:09:13.326 "num_blocks": 65536, 00:09:13.327 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:13.327 "assigned_rate_limits": { 00:09:13.327 "rw_ios_per_sec": 0, 00:09:13.327 "rw_mbytes_per_sec": 0, 00:09:13.327 "r_mbytes_per_sec": 0, 00:09:13.327 "w_mbytes_per_sec": 0 00:09:13.327 }, 00:09:13.327 "claimed": true, 00:09:13.327 "claim_type": "exclusive_write", 00:09:13.327 "zoned": false, 00:09:13.327 "supported_io_types": { 00:09:13.327 "read": true, 00:09:13.327 "write": true, 00:09:13.327 "unmap": true, 00:09:13.327 "flush": true, 00:09:13.327 "reset": true, 00:09:13.327 "nvme_admin": false, 00:09:13.327 "nvme_io": false, 00:09:13.327 "nvme_io_md": false, 00:09:13.327 "write_zeroes": true, 00:09:13.327 "zcopy": true, 00:09:13.327 "get_zone_info": false, 00:09:13.327 "zone_management": false, 00:09:13.327 "zone_append": false, 00:09:13.327 "compare": false, 00:09:13.327 "compare_and_write": false, 00:09:13.327 "abort": true, 00:09:13.327 "seek_hole": false, 00:09:13.327 "seek_data": false, 00:09:13.327 "copy": true, 00:09:13.327 "nvme_iov_md": false 00:09:13.327 }, 00:09:13.327 "memory_domains": [ 00:09:13.327 { 00:09:13.327 "dma_device_id": "system", 00:09:13.327 "dma_device_type": 1 00:09:13.327 }, 00:09:13.327 { 00:09:13.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.327 "dma_device_type": 2 00:09:13.327 } 00:09:13.327 ], 00:09:13.327 "driver_specific": {} 00:09:13.327 } 00:09:13.327 ] 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.327 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.591 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.591 "name": "Existed_Raid", 00:09:13.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.591 "strip_size_kb": 0, 00:09:13.591 "state": "configuring", 00:09:13.591 "raid_level": "raid1", 00:09:13.591 "superblock": false, 00:09:13.591 "num_base_bdevs": 3, 00:09:13.591 "num_base_bdevs_discovered": 2, 00:09:13.591 "num_base_bdevs_operational": 3, 00:09:13.591 "base_bdevs_list": [ 00:09:13.591 { 00:09:13.591 "name": "BaseBdev1", 00:09:13.591 "uuid": 
"283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:13.591 "is_configured": true, 00:09:13.591 "data_offset": 0, 00:09:13.591 "data_size": 65536 00:09:13.592 }, 00:09:13.592 { 00:09:13.592 "name": null, 00:09:13.592 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:13.592 "is_configured": false, 00:09:13.592 "data_offset": 0, 00:09:13.592 "data_size": 65536 00:09:13.592 }, 00:09:13.592 { 00:09:13.592 "name": "BaseBdev3", 00:09:13.592 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:13.592 "is_configured": true, 00:09:13.592 "data_offset": 0, 00:09:13.592 "data_size": 65536 00:09:13.592 } 00:09:13.592 ] 00:09:13.592 }' 00:09:13.592 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.592 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.851 [2024-12-08 20:04:45.770172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.851 20:04:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.851 "name": "Existed_Raid", 00:09:13.851 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:13.851 "strip_size_kb": 0, 00:09:13.851 "state": "configuring", 00:09:13.851 "raid_level": "raid1", 00:09:13.851 "superblock": false, 00:09:13.851 "num_base_bdevs": 3, 00:09:13.851 "num_base_bdevs_discovered": 1, 00:09:13.851 "num_base_bdevs_operational": 3, 00:09:13.851 "base_bdevs_list": [ 00:09:13.851 { 00:09:13.851 "name": "BaseBdev1", 00:09:13.851 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:13.851 "is_configured": true, 00:09:13.851 "data_offset": 0, 00:09:13.851 "data_size": 65536 00:09:13.851 }, 00:09:13.851 { 00:09:13.851 "name": null, 00:09:13.851 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:13.851 "is_configured": false, 00:09:13.851 "data_offset": 0, 00:09:13.851 "data_size": 65536 00:09:13.851 }, 00:09:13.851 { 00:09:13.851 "name": null, 00:09:13.851 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:13.851 "is_configured": false, 00:09:13.851 "data_offset": 0, 00:09:13.851 "data_size": 65536 00:09:13.851 } 00:09:13.851 ] 00:09:13.851 }' 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.851 20:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.474 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.475 [2024-12-08 20:04:46.233416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.475 "name": "Existed_Raid", 00:09:14.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.475 "strip_size_kb": 0, 00:09:14.475 "state": "configuring", 00:09:14.475 "raid_level": "raid1", 00:09:14.475 "superblock": false, 00:09:14.475 "num_base_bdevs": 3, 00:09:14.475 "num_base_bdevs_discovered": 2, 00:09:14.475 "num_base_bdevs_operational": 3, 00:09:14.475 "base_bdevs_list": [ 00:09:14.475 { 00:09:14.475 "name": "BaseBdev1", 00:09:14.475 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:14.475 "is_configured": true, 00:09:14.475 "data_offset": 0, 00:09:14.475 "data_size": 65536 00:09:14.475 }, 00:09:14.475 { 00:09:14.475 "name": null, 00:09:14.475 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:14.475 "is_configured": false, 00:09:14.475 "data_offset": 0, 00:09:14.475 "data_size": 65536 00:09:14.475 }, 00:09:14.475 { 00:09:14.475 "name": "BaseBdev3", 00:09:14.475 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:14.475 "is_configured": true, 00:09:14.475 "data_offset": 0, 00:09:14.475 "data_size": 65536 00:09:14.475 } 00:09:14.475 ] 00:09:14.475 }' 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.475 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.749 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 [2024-12-08 20:04:46.700600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.009 20:04:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.009 "name": "Existed_Raid", 00:09:15.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.009 "strip_size_kb": 0, 00:09:15.009 "state": "configuring", 00:09:15.009 "raid_level": "raid1", 00:09:15.009 "superblock": false, 00:09:15.009 "num_base_bdevs": 3, 00:09:15.009 "num_base_bdevs_discovered": 1, 00:09:15.009 "num_base_bdevs_operational": 3, 00:09:15.009 "base_bdevs_list": [ 00:09:15.009 { 00:09:15.009 "name": null, 00:09:15.009 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:15.009 "is_configured": false, 00:09:15.009 "data_offset": 0, 00:09:15.009 "data_size": 65536 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "name": null, 00:09:15.009 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:15.009 "is_configured": false, 00:09:15.009 "data_offset": 0, 00:09:15.009 "data_size": 65536 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "name": "BaseBdev3", 00:09:15.009 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:15.009 "is_configured": true, 00:09:15.009 "data_offset": 0, 00:09:15.009 "data_size": 65536 00:09:15.009 } 00:09:15.009 ] 00:09:15.009 }' 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.009 20:04:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:15.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.275 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.275 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.538 [2024-12-08 20:04:47.302985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.538 "name": "Existed_Raid", 00:09:15.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.538 "strip_size_kb": 0, 00:09:15.538 "state": "configuring", 00:09:15.538 "raid_level": "raid1", 00:09:15.538 "superblock": false, 00:09:15.538 "num_base_bdevs": 3, 00:09:15.538 "num_base_bdevs_discovered": 2, 00:09:15.538 "num_base_bdevs_operational": 3, 00:09:15.538 "base_bdevs_list": [ 00:09:15.538 { 00:09:15.538 "name": null, 00:09:15.538 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:15.538 "is_configured": false, 00:09:15.538 "data_offset": 0, 00:09:15.538 "data_size": 65536 00:09:15.538 }, 00:09:15.538 { 00:09:15.538 "name": "BaseBdev2", 00:09:15.538 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:15.538 "is_configured": true, 00:09:15.538 "data_offset": 0, 00:09:15.538 "data_size": 65536 00:09:15.538 }, 00:09:15.538 { 
00:09:15.538 "name": "BaseBdev3", 00:09:15.538 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:15.538 "is_configured": true, 00:09:15.538 "data_offset": 0, 00:09:15.538 "data_size": 65536 00:09:15.538 } 00:09:15.538 ] 00:09:15.538 }' 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.538 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.107 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 283f4c3b-3991-4d8d-b995-3278cb0196ab 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.108 20:04:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.108 [2024-12-08 20:04:47.946952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.108 [2024-12-08 20:04:47.947066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.108 [2024-12-08 20:04:47.947075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:16.108 [2024-12-08 20:04:47.947376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:16.108 [2024-12-08 20:04:47.947603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.108 [2024-12-08 20:04:47.947624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:16.108 [2024-12-08 20:04:47.947887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.108 NewBaseBdev 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.108 [ 00:09:16.108 { 00:09:16.108 "name": "NewBaseBdev", 00:09:16.108 "aliases": [ 00:09:16.108 "283f4c3b-3991-4d8d-b995-3278cb0196ab" 00:09:16.108 ], 00:09:16.108 "product_name": "Malloc disk", 00:09:16.108 "block_size": 512, 00:09:16.108 "num_blocks": 65536, 00:09:16.108 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:16.108 "assigned_rate_limits": { 00:09:16.108 "rw_ios_per_sec": 0, 00:09:16.108 "rw_mbytes_per_sec": 0, 00:09:16.108 "r_mbytes_per_sec": 0, 00:09:16.108 "w_mbytes_per_sec": 0 00:09:16.108 }, 00:09:16.108 "claimed": true, 00:09:16.108 "claim_type": "exclusive_write", 00:09:16.108 "zoned": false, 00:09:16.108 "supported_io_types": { 00:09:16.108 "read": true, 00:09:16.108 "write": true, 00:09:16.108 "unmap": true, 00:09:16.108 "flush": true, 00:09:16.108 "reset": true, 00:09:16.108 "nvme_admin": false, 00:09:16.108 "nvme_io": false, 00:09:16.108 "nvme_io_md": false, 00:09:16.108 "write_zeroes": true, 00:09:16.108 "zcopy": true, 00:09:16.108 "get_zone_info": false, 00:09:16.108 "zone_management": false, 00:09:16.108 "zone_append": false, 00:09:16.108 "compare": false, 00:09:16.108 "compare_and_write": false, 00:09:16.108 "abort": true, 00:09:16.108 "seek_hole": false, 00:09:16.108 "seek_data": false, 00:09:16.108 "copy": true, 00:09:16.108 "nvme_iov_md": false 00:09:16.108 }, 00:09:16.108 "memory_domains": [ 00:09:16.108 { 00:09:16.108 
"dma_device_id": "system", 00:09:16.108 "dma_device_type": 1 00:09:16.108 }, 00:09:16.108 { 00:09:16.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.108 "dma_device_type": 2 00:09:16.108 } 00:09:16.108 ], 00:09:16.108 "driver_specific": {} 00:09:16.108 } 00:09:16.108 ] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.108 20:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.108 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.108 "name": "Existed_Raid", 00:09:16.108 "uuid": "a892f2c6-220e-4c4b-8a94-f601b75e2a88", 00:09:16.108 "strip_size_kb": 0, 00:09:16.108 "state": "online", 00:09:16.108 "raid_level": "raid1", 00:09:16.108 "superblock": false, 00:09:16.108 "num_base_bdevs": 3, 00:09:16.108 "num_base_bdevs_discovered": 3, 00:09:16.108 "num_base_bdevs_operational": 3, 00:09:16.108 "base_bdevs_list": [ 00:09:16.108 { 00:09:16.108 "name": "NewBaseBdev", 00:09:16.108 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:16.108 "is_configured": true, 00:09:16.108 "data_offset": 0, 00:09:16.108 "data_size": 65536 00:09:16.108 }, 00:09:16.108 { 00:09:16.108 "name": "BaseBdev2", 00:09:16.108 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:16.108 "is_configured": true, 00:09:16.108 "data_offset": 0, 00:09:16.108 "data_size": 65536 00:09:16.108 }, 00:09:16.108 { 00:09:16.108 "name": "BaseBdev3", 00:09:16.108 "uuid": "9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:16.108 "is_configured": true, 00:09:16.108 "data_offset": 0, 00:09:16.108 "data_size": 65536 00:09:16.108 } 00:09:16.108 ] 00:09:16.108 }' 00:09:16.108 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.108 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.678 20:04:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.678 [2024-12-08 20:04:48.402546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.678 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.678 "name": "Existed_Raid", 00:09:16.678 "aliases": [ 00:09:16.678 "a892f2c6-220e-4c4b-8a94-f601b75e2a88" 00:09:16.678 ], 00:09:16.678 "product_name": "Raid Volume", 00:09:16.678 "block_size": 512, 00:09:16.678 "num_blocks": 65536, 00:09:16.678 "uuid": "a892f2c6-220e-4c4b-8a94-f601b75e2a88", 00:09:16.678 "assigned_rate_limits": { 00:09:16.678 "rw_ios_per_sec": 0, 00:09:16.678 "rw_mbytes_per_sec": 0, 00:09:16.678 "r_mbytes_per_sec": 0, 00:09:16.679 "w_mbytes_per_sec": 0 00:09:16.679 }, 00:09:16.679 "claimed": false, 00:09:16.679 "zoned": false, 00:09:16.679 "supported_io_types": { 00:09:16.679 "read": true, 00:09:16.679 "write": true, 00:09:16.679 "unmap": false, 00:09:16.679 "flush": false, 00:09:16.679 "reset": true, 00:09:16.679 "nvme_admin": false, 00:09:16.679 "nvme_io": false, 00:09:16.679 "nvme_io_md": false, 00:09:16.679 "write_zeroes": true, 00:09:16.679 "zcopy": false, 00:09:16.679 
"get_zone_info": false, 00:09:16.679 "zone_management": false, 00:09:16.679 "zone_append": false, 00:09:16.679 "compare": false, 00:09:16.679 "compare_and_write": false, 00:09:16.679 "abort": false, 00:09:16.679 "seek_hole": false, 00:09:16.679 "seek_data": false, 00:09:16.679 "copy": false, 00:09:16.679 "nvme_iov_md": false 00:09:16.679 }, 00:09:16.679 "memory_domains": [ 00:09:16.679 { 00:09:16.679 "dma_device_id": "system", 00:09:16.679 "dma_device_type": 1 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.679 "dma_device_type": 2 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "dma_device_id": "system", 00:09:16.679 "dma_device_type": 1 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.679 "dma_device_type": 2 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "dma_device_id": "system", 00:09:16.679 "dma_device_type": 1 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.679 "dma_device_type": 2 00:09:16.679 } 00:09:16.679 ], 00:09:16.679 "driver_specific": { 00:09:16.679 "raid": { 00:09:16.679 "uuid": "a892f2c6-220e-4c4b-8a94-f601b75e2a88", 00:09:16.679 "strip_size_kb": 0, 00:09:16.679 "state": "online", 00:09:16.679 "raid_level": "raid1", 00:09:16.679 "superblock": false, 00:09:16.679 "num_base_bdevs": 3, 00:09:16.679 "num_base_bdevs_discovered": 3, 00:09:16.679 "num_base_bdevs_operational": 3, 00:09:16.679 "base_bdevs_list": [ 00:09:16.679 { 00:09:16.679 "name": "NewBaseBdev", 00:09:16.679 "uuid": "283f4c3b-3991-4d8d-b995-3278cb0196ab", 00:09:16.679 "is_configured": true, 00:09:16.679 "data_offset": 0, 00:09:16.679 "data_size": 65536 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "name": "BaseBdev2", 00:09:16.679 "uuid": "53d0cc5b-7128-49fe-aa4d-729b178d7fd9", 00:09:16.679 "is_configured": true, 00:09:16.679 "data_offset": 0, 00:09:16.679 "data_size": 65536 00:09:16.679 }, 00:09:16.679 { 00:09:16.679 "name": "BaseBdev3", 00:09:16.679 "uuid": 
"9dd0b4a9-9202-40d3-88c1-e7938131af95", 00:09:16.679 "is_configured": true, 00:09:16.679 "data_offset": 0, 00:09:16.679 "data_size": 65536 00:09:16.679 } 00:09:16.679 ] 00:09:16.679 } 00:09:16.679 } 00:09:16.679 }' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.679 BaseBdev2 00:09:16.679 BaseBdev3' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.679 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.939 
[2024-12-08 20:04:48.661787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.939 [2024-12-08 20:04:48.661823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.939 [2024-12-08 20:04:48.661903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.939 [2024-12-08 20:04:48.662217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.939 [2024-12-08 20:04:48.662236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67219 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67219 ']' 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67219 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67219 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.939 killing process with pid 67219 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67219' 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67219 00:09:16.939 [2024-12-08 
20:04:48.709886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.939 20:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67219 00:09:17.199 [2024-12-08 20:04:49.013369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.581 00:09:18.581 real 0m10.561s 00:09:18.581 user 0m16.849s 00:09:18.581 sys 0m1.797s 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.581 ************************************ 00:09:18.581 END TEST raid_state_function_test 00:09:18.581 ************************************ 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.581 20:04:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:18.581 20:04:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.581 20:04:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.581 20:04:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.581 ************************************ 00:09:18.581 START TEST raid_state_function_test_sb 00:09:18.581 ************************************ 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.581 20:04:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.581 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:18.582 
20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67840 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:18.582 Process raid pid: 67840 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67840' 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67840 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67840 ']' 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.582 20:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.582 [2024-12-08 20:04:50.282726] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:18.582 [2024-12-08 20:04:50.282840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.582 [2024-12-08 20:04:50.436575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.582 [2024-12-08 20:04:50.551582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.842 [2024-12-08 20:04:50.752762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.842 [2024-12-08 20:04:50.752824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.413 [2024-12-08 20:04:51.113879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.413 [2024-12-08 20:04:51.113938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.413 [2024-12-08 20:04:51.113966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.413 [2024-12-08 20:04:51.113977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.413 [2024-12-08 20:04:51.113983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:19.413 [2024-12-08 20:04:51.113992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.413 "name": "Existed_Raid", 00:09:19.413 "uuid": "fda5c16f-9cc9-4ca9-b8af-d45a2be24898", 00:09:19.413 "strip_size_kb": 0, 00:09:19.413 "state": "configuring", 00:09:19.413 "raid_level": "raid1", 00:09:19.413 "superblock": true, 00:09:19.413 "num_base_bdevs": 3, 00:09:19.413 "num_base_bdevs_discovered": 0, 00:09:19.413 "num_base_bdevs_operational": 3, 00:09:19.413 "base_bdevs_list": [ 00:09:19.413 { 00:09:19.413 "name": "BaseBdev1", 00:09:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.413 "is_configured": false, 00:09:19.413 "data_offset": 0, 00:09:19.413 "data_size": 0 00:09:19.413 }, 00:09:19.413 { 00:09:19.413 "name": "BaseBdev2", 00:09:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.413 "is_configured": false, 00:09:19.413 "data_offset": 0, 00:09:19.413 "data_size": 0 00:09:19.413 }, 00:09:19.413 { 00:09:19.413 "name": "BaseBdev3", 00:09:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.413 "is_configured": false, 00:09:19.413 "data_offset": 0, 00:09:19.413 "data_size": 0 00:09:19.413 } 00:09:19.413 ] 00:09:19.413 }' 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.413 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.673 [2024-12-08 20:04:51.581059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.673 [2024-12-08 20:04:51.581101] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.673 [2024-12-08 20:04:51.593046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.673 [2024-12-08 20:04:51.593086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.673 [2024-12-08 20:04:51.593095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.673 [2024-12-08 20:04:51.593104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.673 [2024-12-08 20:04:51.593111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.673 [2024-12-08 20:04:51.593120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.673 [2024-12-08 20:04:51.640506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.673 BaseBdev1 
00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.673 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.933 [ 00:09:19.933 { 00:09:19.933 "name": "BaseBdev1", 00:09:19.933 "aliases": [ 00:09:19.933 "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3" 00:09:19.933 ], 00:09:19.933 "product_name": "Malloc disk", 00:09:19.933 "block_size": 512, 00:09:19.933 "num_blocks": 65536, 00:09:19.933 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:19.933 "assigned_rate_limits": { 00:09:19.933 
"rw_ios_per_sec": 0, 00:09:19.933 "rw_mbytes_per_sec": 0, 00:09:19.933 "r_mbytes_per_sec": 0, 00:09:19.933 "w_mbytes_per_sec": 0 00:09:19.933 }, 00:09:19.933 "claimed": true, 00:09:19.933 "claim_type": "exclusive_write", 00:09:19.933 "zoned": false, 00:09:19.933 "supported_io_types": { 00:09:19.933 "read": true, 00:09:19.933 "write": true, 00:09:19.933 "unmap": true, 00:09:19.933 "flush": true, 00:09:19.933 "reset": true, 00:09:19.933 "nvme_admin": false, 00:09:19.933 "nvme_io": false, 00:09:19.933 "nvme_io_md": false, 00:09:19.933 "write_zeroes": true, 00:09:19.933 "zcopy": true, 00:09:19.933 "get_zone_info": false, 00:09:19.933 "zone_management": false, 00:09:19.933 "zone_append": false, 00:09:19.933 "compare": false, 00:09:19.933 "compare_and_write": false, 00:09:19.933 "abort": true, 00:09:19.933 "seek_hole": false, 00:09:19.933 "seek_data": false, 00:09:19.933 "copy": true, 00:09:19.933 "nvme_iov_md": false 00:09:19.933 }, 00:09:19.933 "memory_domains": [ 00:09:19.933 { 00:09:19.933 "dma_device_id": "system", 00:09:19.933 "dma_device_type": 1 00:09:19.933 }, 00:09:19.933 { 00:09:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.933 "dma_device_type": 2 00:09:19.933 } 00:09:19.933 ], 00:09:19.933 "driver_specific": {} 00:09:19.933 } 00:09:19.933 ] 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.933 "name": "Existed_Raid", 00:09:19.933 "uuid": "40385590-d1e6-4aac-aeb0-9b0060a502e5", 00:09:19.933 "strip_size_kb": 0, 00:09:19.933 "state": "configuring", 00:09:19.933 "raid_level": "raid1", 00:09:19.933 "superblock": true, 00:09:19.933 "num_base_bdevs": 3, 00:09:19.933 "num_base_bdevs_discovered": 1, 00:09:19.933 "num_base_bdevs_operational": 3, 00:09:19.933 "base_bdevs_list": [ 00:09:19.933 { 00:09:19.933 "name": "BaseBdev1", 00:09:19.933 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:19.933 "is_configured": true, 00:09:19.933 "data_offset": 2048, 00:09:19.933 "data_size": 63488 
00:09:19.933 }, 00:09:19.933 { 00:09:19.933 "name": "BaseBdev2", 00:09:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.933 "is_configured": false, 00:09:19.933 "data_offset": 0, 00:09:19.933 "data_size": 0 00:09:19.933 }, 00:09:19.933 { 00:09:19.933 "name": "BaseBdev3", 00:09:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.933 "is_configured": false, 00:09:19.933 "data_offset": 0, 00:09:19.933 "data_size": 0 00:09:19.933 } 00:09:19.933 ] 00:09:19.933 }' 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.933 20:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.192 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.192 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.192 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.192 [2024-12-08 20:04:52.119762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.192 [2024-12-08 20:04:52.119821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.193 [2024-12-08 20:04:52.131775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.193 [2024-12-08 20:04:52.133646] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.193 [2024-12-08 20:04:52.133690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.193 [2024-12-08 20:04:52.133700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.193 [2024-12-08 20:04:52.133708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.193 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.452 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.452 "name": "Existed_Raid", 00:09:20.452 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:20.452 "strip_size_kb": 0, 00:09:20.452 "state": "configuring", 00:09:20.452 "raid_level": "raid1", 00:09:20.452 "superblock": true, 00:09:20.452 "num_base_bdevs": 3, 00:09:20.452 "num_base_bdevs_discovered": 1, 00:09:20.452 "num_base_bdevs_operational": 3, 00:09:20.452 "base_bdevs_list": [ 00:09:20.452 { 00:09:20.452 "name": "BaseBdev1", 00:09:20.452 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:20.452 "is_configured": true, 00:09:20.452 "data_offset": 2048, 00:09:20.452 "data_size": 63488 00:09:20.452 }, 00:09:20.452 { 00:09:20.452 "name": "BaseBdev2", 00:09:20.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.452 "is_configured": false, 00:09:20.452 "data_offset": 0, 00:09:20.452 "data_size": 0 00:09:20.452 }, 00:09:20.452 { 00:09:20.452 "name": "BaseBdev3", 00:09:20.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.452 "is_configured": false, 00:09:20.452 "data_offset": 0, 00:09:20.452 "data_size": 0 00:09:20.452 } 00:09:20.452 ] 00:09:20.452 }' 00:09:20.452 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.452 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.712 [2024-12-08 20:04:52.617906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.712 BaseBdev2 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.712 [ 00:09:20.712 { 00:09:20.712 "name": "BaseBdev2", 00:09:20.712 "aliases": [ 00:09:20.712 "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a" 00:09:20.712 ], 00:09:20.712 "product_name": "Malloc disk", 00:09:20.712 "block_size": 512, 00:09:20.712 "num_blocks": 65536, 00:09:20.712 "uuid": "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a", 00:09:20.712 "assigned_rate_limits": { 00:09:20.712 "rw_ios_per_sec": 0, 00:09:20.712 "rw_mbytes_per_sec": 0, 00:09:20.712 "r_mbytes_per_sec": 0, 00:09:20.712 "w_mbytes_per_sec": 0 00:09:20.712 }, 00:09:20.712 "claimed": true, 00:09:20.712 "claim_type": "exclusive_write", 00:09:20.712 "zoned": false, 00:09:20.712 "supported_io_types": { 00:09:20.712 "read": true, 00:09:20.712 "write": true, 00:09:20.712 "unmap": true, 00:09:20.712 "flush": true, 00:09:20.712 "reset": true, 00:09:20.712 "nvme_admin": false, 00:09:20.712 "nvme_io": false, 00:09:20.712 "nvme_io_md": false, 00:09:20.712 "write_zeroes": true, 00:09:20.712 "zcopy": true, 00:09:20.712 "get_zone_info": false, 00:09:20.712 "zone_management": false, 00:09:20.712 "zone_append": false, 00:09:20.712 "compare": false, 00:09:20.712 "compare_and_write": false, 00:09:20.712 "abort": true, 00:09:20.712 "seek_hole": false, 00:09:20.712 "seek_data": false, 00:09:20.712 "copy": true, 00:09:20.712 "nvme_iov_md": false 00:09:20.712 }, 00:09:20.712 "memory_domains": [ 00:09:20.712 { 00:09:20.712 "dma_device_id": "system", 00:09:20.712 "dma_device_type": 1 00:09:20.712 }, 00:09:20.712 { 00:09:20.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.712 "dma_device_type": 2 00:09:20.712 } 00:09:20.712 ], 00:09:20.712 "driver_specific": {} 00:09:20.712 } 00:09:20.712 ] 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.712 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.972 
20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.972 "name": "Existed_Raid", 00:09:20.972 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:20.972 "strip_size_kb": 0, 00:09:20.972 "state": "configuring", 00:09:20.972 "raid_level": "raid1", 00:09:20.972 "superblock": true, 00:09:20.972 "num_base_bdevs": 3, 00:09:20.972 "num_base_bdevs_discovered": 2, 00:09:20.972 "num_base_bdevs_operational": 3, 00:09:20.972 "base_bdevs_list": [ 00:09:20.972 { 00:09:20.972 "name": "BaseBdev1", 00:09:20.972 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:20.972 "is_configured": true, 00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "name": "BaseBdev2", 00:09:20.972 "uuid": "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a", 00:09:20.972 "is_configured": true, 00:09:20.972 "data_offset": 2048, 00:09:20.972 "data_size": 63488 00:09:20.972 }, 00:09:20.972 { 00:09:20.972 "name": "BaseBdev3", 00:09:20.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.972 "is_configured": false, 00:09:20.972 "data_offset": 0, 00:09:20.972 "data_size": 0 00:09:20.972 } 00:09:20.972 ] 00:09:20.972 }' 00:09:20.972 20:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.972 20:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.232 [2024-12-08 20:04:53.125337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.232 [2024-12-08 20:04:53.125609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:21.232 [2024-12-08 20:04:53.125629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:21.232 [2024-12-08 20:04:53.125958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.232 [2024-12-08 20:04:53.126144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.232 [2024-12-08 20:04:53.126161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.232 BaseBdev3 00:09:21.232 [2024-12-08 20:04:53.126349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.232 20:04:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.232 [ 00:09:21.232 { 00:09:21.232 "name": "BaseBdev3", 00:09:21.232 "aliases": [ 00:09:21.232 "6e827051-d261-49ad-83a4-e8ea802e63c0" 00:09:21.232 ], 00:09:21.232 "product_name": "Malloc disk", 00:09:21.232 "block_size": 512, 00:09:21.232 "num_blocks": 65536, 00:09:21.232 "uuid": "6e827051-d261-49ad-83a4-e8ea802e63c0", 00:09:21.232 "assigned_rate_limits": { 00:09:21.232 "rw_ios_per_sec": 0, 00:09:21.232 "rw_mbytes_per_sec": 0, 00:09:21.232 "r_mbytes_per_sec": 0, 00:09:21.232 "w_mbytes_per_sec": 0 00:09:21.232 }, 00:09:21.232 "claimed": true, 00:09:21.232 "claim_type": "exclusive_write", 00:09:21.232 "zoned": false, 00:09:21.232 "supported_io_types": { 00:09:21.232 "read": true, 00:09:21.232 "write": true, 00:09:21.232 "unmap": true, 00:09:21.232 "flush": true, 00:09:21.232 "reset": true, 00:09:21.232 "nvme_admin": false, 00:09:21.232 "nvme_io": false, 00:09:21.232 "nvme_io_md": false, 00:09:21.232 "write_zeroes": true, 00:09:21.232 "zcopy": true, 00:09:21.232 "get_zone_info": false, 00:09:21.232 "zone_management": false, 00:09:21.232 "zone_append": false, 00:09:21.232 "compare": false, 00:09:21.232 "compare_and_write": false, 00:09:21.232 "abort": true, 00:09:21.232 "seek_hole": false, 00:09:21.232 "seek_data": false, 00:09:21.232 "copy": true, 00:09:21.232 "nvme_iov_md": false 00:09:21.232 }, 00:09:21.232 "memory_domains": [ 00:09:21.232 { 00:09:21.232 "dma_device_id": "system", 00:09:21.232 "dma_device_type": 1 00:09:21.232 }, 00:09:21.232 { 00:09:21.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.232 "dma_device_type": 2 00:09:21.232 } 00:09:21.232 ], 00:09:21.232 "driver_specific": {} 00:09:21.232 } 00:09:21.232 ] 
00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.232 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.233 
20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.233 "name": "Existed_Raid", 00:09:21.233 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:21.233 "strip_size_kb": 0, 00:09:21.233 "state": "online", 00:09:21.233 "raid_level": "raid1", 00:09:21.233 "superblock": true, 00:09:21.233 "num_base_bdevs": 3, 00:09:21.233 "num_base_bdevs_discovered": 3, 00:09:21.233 "num_base_bdevs_operational": 3, 00:09:21.233 "base_bdevs_list": [ 00:09:21.233 { 00:09:21.233 "name": "BaseBdev1", 00:09:21.233 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:21.233 "is_configured": true, 00:09:21.233 "data_offset": 2048, 00:09:21.233 "data_size": 63488 00:09:21.233 }, 00:09:21.233 { 00:09:21.233 "name": "BaseBdev2", 00:09:21.233 "uuid": "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a", 00:09:21.233 "is_configured": true, 00:09:21.233 "data_offset": 2048, 00:09:21.233 "data_size": 63488 00:09:21.233 }, 00:09:21.233 { 00:09:21.233 "name": "BaseBdev3", 00:09:21.233 "uuid": "6e827051-d261-49ad-83a4-e8ea802e63c0", 00:09:21.233 "is_configured": true, 00:09:21.233 "data_offset": 2048, 00:09:21.233 "data_size": 63488 00:09:21.233 } 00:09:21.233 ] 00:09:21.233 }' 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.233 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.801 [2024-12-08 20:04:53.584928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.801 "name": "Existed_Raid", 00:09:21.801 "aliases": [ 00:09:21.801 "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e" 00:09:21.801 ], 00:09:21.801 "product_name": "Raid Volume", 00:09:21.801 "block_size": 512, 00:09:21.801 "num_blocks": 63488, 00:09:21.801 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:21.801 "assigned_rate_limits": { 00:09:21.801 "rw_ios_per_sec": 0, 00:09:21.801 "rw_mbytes_per_sec": 0, 00:09:21.801 "r_mbytes_per_sec": 0, 00:09:21.801 "w_mbytes_per_sec": 0 00:09:21.801 }, 00:09:21.801 "claimed": false, 00:09:21.801 "zoned": false, 00:09:21.801 "supported_io_types": { 00:09:21.801 "read": true, 00:09:21.801 "write": true, 00:09:21.801 "unmap": false, 00:09:21.801 "flush": false, 00:09:21.801 "reset": true, 00:09:21.801 "nvme_admin": false, 00:09:21.801 "nvme_io": false, 00:09:21.801 "nvme_io_md": false, 00:09:21.801 "write_zeroes": true, 
00:09:21.801 "zcopy": false, 00:09:21.801 "get_zone_info": false, 00:09:21.801 "zone_management": false, 00:09:21.801 "zone_append": false, 00:09:21.801 "compare": false, 00:09:21.801 "compare_and_write": false, 00:09:21.801 "abort": false, 00:09:21.801 "seek_hole": false, 00:09:21.801 "seek_data": false, 00:09:21.801 "copy": false, 00:09:21.801 "nvme_iov_md": false 00:09:21.801 }, 00:09:21.801 "memory_domains": [ 00:09:21.801 { 00:09:21.801 "dma_device_id": "system", 00:09:21.801 "dma_device_type": 1 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.801 "dma_device_type": 2 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "dma_device_id": "system", 00:09:21.801 "dma_device_type": 1 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.801 "dma_device_type": 2 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "dma_device_id": "system", 00:09:21.801 "dma_device_type": 1 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.801 "dma_device_type": 2 00:09:21.801 } 00:09:21.801 ], 00:09:21.801 "driver_specific": { 00:09:21.801 "raid": { 00:09:21.801 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:21.801 "strip_size_kb": 0, 00:09:21.801 "state": "online", 00:09:21.801 "raid_level": "raid1", 00:09:21.801 "superblock": true, 00:09:21.801 "num_base_bdevs": 3, 00:09:21.801 "num_base_bdevs_discovered": 3, 00:09:21.801 "num_base_bdevs_operational": 3, 00:09:21.801 "base_bdevs_list": [ 00:09:21.801 { 00:09:21.801 "name": "BaseBdev1", 00:09:21.801 "uuid": "7da31f3b-9fd5-4e73-b4fa-40a6f8c0b9d3", 00:09:21.801 "is_configured": true, 00:09:21.801 "data_offset": 2048, 00:09:21.801 "data_size": 63488 00:09:21.801 }, 00:09:21.801 { 00:09:21.801 "name": "BaseBdev2", 00:09:21.801 "uuid": "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a", 00:09:21.801 "is_configured": true, 00:09:21.801 "data_offset": 2048, 00:09:21.801 "data_size": 63488 00:09:21.801 }, 00:09:21.801 { 
00:09:21.801 "name": "BaseBdev3", 00:09:21.801 "uuid": "6e827051-d261-49ad-83a4-e8ea802e63c0", 00:09:21.801 "is_configured": true, 00:09:21.801 "data_offset": 2048, 00:09:21.801 "data_size": 63488 00:09:21.801 } 00:09:21.801 ] 00:09:21.801 } 00:09:21.801 } 00:09:21.801 }' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.801 BaseBdev2 00:09:21.801 BaseBdev3' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.801 20:04:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.801 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 [2024-12-08 20:04:53.832260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.061 
20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.061 "name": "Existed_Raid", 00:09:22.061 "uuid": "7fed6eda-da1d-4bc8-98b4-4fd85ae2be8e", 00:09:22.061 "strip_size_kb": 0, 00:09:22.061 "state": "online", 00:09:22.061 "raid_level": "raid1", 00:09:22.061 "superblock": true, 00:09:22.061 "num_base_bdevs": 3, 00:09:22.061 "num_base_bdevs_discovered": 2, 00:09:22.061 "num_base_bdevs_operational": 2, 00:09:22.061 "base_bdevs_list": [ 00:09:22.061 { 00:09:22.061 "name": null, 00:09:22.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.061 "is_configured": false, 00:09:22.061 "data_offset": 0, 00:09:22.061 "data_size": 63488 00:09:22.061 }, 00:09:22.061 { 00:09:22.061 "name": "BaseBdev2", 00:09:22.061 "uuid": "f9d6bc79-911d-44ab-a772-ec9b7ab9df9a", 00:09:22.061 "is_configured": true, 00:09:22.061 "data_offset": 2048, 00:09:22.061 "data_size": 63488 00:09:22.061 }, 00:09:22.061 { 00:09:22.061 "name": "BaseBdev3", 00:09:22.061 "uuid": "6e827051-d261-49ad-83a4-e8ea802e63c0", 00:09:22.061 "is_configured": true, 00:09:22.061 "data_offset": 2048, 00:09:22.061 "data_size": 63488 00:09:22.061 } 00:09:22.061 ] 00:09:22.061 }' 00:09:22.061 20:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.061 
20:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.629 [2024-12-08 20:04:54.397838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.629 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.629 [2024-12-08 20:04:54.551396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.629 [2024-12-08 20:04:54.551507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.888 [2024-12-08 20:04:54.646398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.888 [2024-12-08 20:04:54.646455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.888 [2024-12-08 20:04:54.646466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.888 BaseBdev2 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.888 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 [ 00:09:22.889 { 00:09:22.889 "name": "BaseBdev2", 00:09:22.889 "aliases": [ 00:09:22.889 "5e931ae8-b32d-4eec-9f88-c85c50f6c59a" 00:09:22.889 ], 00:09:22.889 "product_name": "Malloc disk", 00:09:22.889 "block_size": 512, 00:09:22.889 "num_blocks": 65536, 00:09:22.889 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:22.889 "assigned_rate_limits": { 00:09:22.889 "rw_ios_per_sec": 0, 00:09:22.889 "rw_mbytes_per_sec": 0, 00:09:22.889 "r_mbytes_per_sec": 0, 00:09:22.889 "w_mbytes_per_sec": 0 00:09:22.889 }, 00:09:22.889 "claimed": false, 00:09:22.889 "zoned": false, 00:09:22.889 "supported_io_types": { 00:09:22.889 "read": true, 00:09:22.889 "write": true, 00:09:22.889 "unmap": true, 00:09:22.889 "flush": true, 00:09:22.889 "reset": true, 00:09:22.889 "nvme_admin": false, 00:09:22.889 "nvme_io": false, 00:09:22.889 
"nvme_io_md": false, 00:09:22.889 "write_zeroes": true, 00:09:22.889 "zcopy": true, 00:09:22.889 "get_zone_info": false, 00:09:22.889 "zone_management": false, 00:09:22.889 "zone_append": false, 00:09:22.889 "compare": false, 00:09:22.889 "compare_and_write": false, 00:09:22.889 "abort": true, 00:09:22.889 "seek_hole": false, 00:09:22.889 "seek_data": false, 00:09:22.889 "copy": true, 00:09:22.889 "nvme_iov_md": false 00:09:22.889 }, 00:09:22.889 "memory_domains": [ 00:09:22.889 { 00:09:22.889 "dma_device_id": "system", 00:09:22.889 "dma_device_type": 1 00:09:22.889 }, 00:09:22.889 { 00:09:22.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.889 "dma_device_type": 2 00:09:22.889 } 00:09:22.889 ], 00:09:22.889 "driver_specific": {} 00:09:22.889 } 00:09:22.889 ] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 BaseBdev3 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 [ 00:09:22.889 { 00:09:22.889 "name": "BaseBdev3", 00:09:22.889 "aliases": [ 00:09:22.889 "a390785a-e1b4-4e80-a5f9-b6df9741fa0b" 00:09:22.889 ], 00:09:22.889 "product_name": "Malloc disk", 00:09:22.889 "block_size": 512, 00:09:22.889 "num_blocks": 65536, 00:09:22.889 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:22.889 "assigned_rate_limits": { 00:09:22.889 "rw_ios_per_sec": 0, 00:09:22.889 "rw_mbytes_per_sec": 0, 00:09:22.889 "r_mbytes_per_sec": 0, 00:09:22.889 "w_mbytes_per_sec": 0 00:09:22.889 }, 00:09:22.889 "claimed": false, 00:09:22.889 "zoned": false, 00:09:22.889 "supported_io_types": { 00:09:22.889 "read": true, 00:09:22.889 "write": true, 00:09:22.889 "unmap": true, 00:09:22.889 "flush": true, 00:09:22.889 "reset": true, 00:09:22.889 "nvme_admin": false, 
00:09:22.889 "nvme_io": false, 00:09:22.889 "nvme_io_md": false, 00:09:22.889 "write_zeroes": true, 00:09:22.889 "zcopy": true, 00:09:22.889 "get_zone_info": false, 00:09:22.889 "zone_management": false, 00:09:22.889 "zone_append": false, 00:09:22.889 "compare": false, 00:09:22.889 "compare_and_write": false, 00:09:22.889 "abort": true, 00:09:22.889 "seek_hole": false, 00:09:22.889 "seek_data": false, 00:09:22.889 "copy": true, 00:09:22.889 "nvme_iov_md": false 00:09:22.889 }, 00:09:22.889 "memory_domains": [ 00:09:22.889 { 00:09:22.889 "dma_device_id": "system", 00:09:22.889 "dma_device_type": 1 00:09:22.889 }, 00:09:22.889 { 00:09:22.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.889 "dma_device_type": 2 00:09:22.889 } 00:09:22.889 ], 00:09:22.889 "driver_specific": {} 00:09:22.889 } 00:09:22.889 ] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.889 [2024-12-08 20:04:54.858937] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.889 [2024-12-08 20:04:54.858996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.889 [2024-12-08 20:04:54.859015] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.889 [2024-12-08 20:04:54.861002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.889 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.148 
20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.148 "name": "Existed_Raid", 00:09:23.148 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:23.148 "strip_size_kb": 0, 00:09:23.148 "state": "configuring", 00:09:23.148 "raid_level": "raid1", 00:09:23.148 "superblock": true, 00:09:23.148 "num_base_bdevs": 3, 00:09:23.148 "num_base_bdevs_discovered": 2, 00:09:23.148 "num_base_bdevs_operational": 3, 00:09:23.148 "base_bdevs_list": [ 00:09:23.148 { 00:09:23.148 "name": "BaseBdev1", 00:09:23.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.148 "is_configured": false, 00:09:23.148 "data_offset": 0, 00:09:23.148 "data_size": 0 00:09:23.148 }, 00:09:23.148 { 00:09:23.148 "name": "BaseBdev2", 00:09:23.148 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:23.148 "is_configured": true, 00:09:23.148 "data_offset": 2048, 00:09:23.148 "data_size": 63488 00:09:23.148 }, 00:09:23.148 { 00:09:23.148 "name": "BaseBdev3", 00:09:23.148 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:23.148 "is_configured": true, 00:09:23.148 "data_offset": 2048, 00:09:23.148 "data_size": 63488 00:09:23.148 } 00:09:23.148 ] 00:09:23.148 }' 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.148 20:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.405 [2024-12-08 20:04:55.302208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.405 20:04:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.405 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.405 "name": 
"Existed_Raid", 00:09:23.405 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:23.405 "strip_size_kb": 0, 00:09:23.405 "state": "configuring", 00:09:23.406 "raid_level": "raid1", 00:09:23.406 "superblock": true, 00:09:23.406 "num_base_bdevs": 3, 00:09:23.406 "num_base_bdevs_discovered": 1, 00:09:23.406 "num_base_bdevs_operational": 3, 00:09:23.406 "base_bdevs_list": [ 00:09:23.406 { 00:09:23.406 "name": "BaseBdev1", 00:09:23.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.406 "is_configured": false, 00:09:23.406 "data_offset": 0, 00:09:23.406 "data_size": 0 00:09:23.406 }, 00:09:23.406 { 00:09:23.406 "name": null, 00:09:23.406 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:23.406 "is_configured": false, 00:09:23.406 "data_offset": 0, 00:09:23.406 "data_size": 63488 00:09:23.406 }, 00:09:23.406 { 00:09:23.406 "name": "BaseBdev3", 00:09:23.406 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:23.406 "is_configured": true, 00:09:23.406 "data_offset": 2048, 00:09:23.406 "data_size": 63488 00:09:23.406 } 00:09:23.406 ] 00:09:23.406 }' 00:09:23.406 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.406 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:23.974 
20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 [2024-12-08 20:04:55.829821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.974 BaseBdev1 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.975 [ 00:09:23.975 { 00:09:23.975 "name": "BaseBdev1", 00:09:23.975 "aliases": [ 00:09:23.975 "8dd64dd7-49e1-438b-a70a-70972e2801ad" 00:09:23.975 ], 00:09:23.975 "product_name": "Malloc disk", 00:09:23.975 "block_size": 512, 00:09:23.975 "num_blocks": 65536, 00:09:23.975 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:23.975 "assigned_rate_limits": { 00:09:23.975 "rw_ios_per_sec": 0, 00:09:23.975 "rw_mbytes_per_sec": 0, 00:09:23.975 "r_mbytes_per_sec": 0, 00:09:23.975 "w_mbytes_per_sec": 0 00:09:23.975 }, 00:09:23.975 "claimed": true, 00:09:23.975 "claim_type": "exclusive_write", 00:09:23.975 "zoned": false, 00:09:23.975 "supported_io_types": { 00:09:23.975 "read": true, 00:09:23.975 "write": true, 00:09:23.975 "unmap": true, 00:09:23.975 "flush": true, 00:09:23.975 "reset": true, 00:09:23.975 "nvme_admin": false, 00:09:23.975 "nvme_io": false, 00:09:23.975 "nvme_io_md": false, 00:09:23.975 "write_zeroes": true, 00:09:23.975 "zcopy": true, 00:09:23.975 "get_zone_info": false, 00:09:23.975 "zone_management": false, 00:09:23.975 "zone_append": false, 00:09:23.975 "compare": false, 00:09:23.975 "compare_and_write": false, 00:09:23.975 "abort": true, 00:09:23.975 "seek_hole": false, 00:09:23.975 "seek_data": false, 00:09:23.975 "copy": true, 00:09:23.975 "nvme_iov_md": false 00:09:23.975 }, 00:09:23.975 "memory_domains": [ 00:09:23.975 { 00:09:23.975 "dma_device_id": "system", 00:09:23.975 "dma_device_type": 1 00:09:23.975 }, 00:09:23.975 { 00:09:23.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.975 "dma_device_type": 2 00:09:23.975 } 00:09:23.975 ], 00:09:23.975 "driver_specific": {} 00:09:23.975 } 00:09:23.975 ] 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.975 
20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.975 "name": "Existed_Raid", 00:09:23.975 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:23.975 "strip_size_kb": 0, 
00:09:23.975 "state": "configuring", 00:09:23.975 "raid_level": "raid1", 00:09:23.975 "superblock": true, 00:09:23.975 "num_base_bdevs": 3, 00:09:23.975 "num_base_bdevs_discovered": 2, 00:09:23.975 "num_base_bdevs_operational": 3, 00:09:23.975 "base_bdevs_list": [ 00:09:23.975 { 00:09:23.975 "name": "BaseBdev1", 00:09:23.975 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:23.975 "is_configured": true, 00:09:23.975 "data_offset": 2048, 00:09:23.975 "data_size": 63488 00:09:23.975 }, 00:09:23.975 { 00:09:23.975 "name": null, 00:09:23.975 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:23.975 "is_configured": false, 00:09:23.975 "data_offset": 0, 00:09:23.975 "data_size": 63488 00:09:23.975 }, 00:09:23.975 { 00:09:23.975 "name": "BaseBdev3", 00:09:23.975 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:23.975 "is_configured": true, 00:09:23.975 "data_offset": 2048, 00:09:23.975 "data_size": 63488 00:09:23.975 } 00:09:23.975 ] 00:09:23.975 }' 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.975 20:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.550 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.550 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.550 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.551 [2024-12-08 20:04:56.396903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.551 20:04:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.551 "name": "Existed_Raid", 00:09:24.551 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:24.551 "strip_size_kb": 0, 00:09:24.551 "state": "configuring", 00:09:24.551 "raid_level": "raid1", 00:09:24.551 "superblock": true, 00:09:24.551 "num_base_bdevs": 3, 00:09:24.551 "num_base_bdevs_discovered": 1, 00:09:24.551 "num_base_bdevs_operational": 3, 00:09:24.551 "base_bdevs_list": [ 00:09:24.551 { 00:09:24.551 "name": "BaseBdev1", 00:09:24.551 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:24.551 "is_configured": true, 00:09:24.551 "data_offset": 2048, 00:09:24.551 "data_size": 63488 00:09:24.551 }, 00:09:24.551 { 00:09:24.551 "name": null, 00:09:24.551 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:24.551 "is_configured": false, 00:09:24.551 "data_offset": 0, 00:09:24.551 "data_size": 63488 00:09:24.551 }, 00:09:24.551 { 00:09:24.551 "name": null, 00:09:24.551 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:24.551 "is_configured": false, 00:09:24.551 "data_offset": 0, 00:09:24.551 "data_size": 63488 00:09:24.551 } 00:09:24.551 ] 00:09:24.551 }' 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.551 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 [2024-12-08 20:04:56.888114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.121 "name": "Existed_Raid", 00:09:25.121 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:25.121 "strip_size_kb": 0, 00:09:25.121 "state": "configuring", 00:09:25.121 "raid_level": "raid1", 00:09:25.121 "superblock": true, 00:09:25.121 "num_base_bdevs": 3, 00:09:25.121 "num_base_bdevs_discovered": 2, 00:09:25.121 "num_base_bdevs_operational": 3, 00:09:25.121 "base_bdevs_list": [ 00:09:25.121 { 00:09:25.121 "name": "BaseBdev1", 00:09:25.121 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:25.121 "is_configured": true, 00:09:25.121 "data_offset": 2048, 00:09:25.121 "data_size": 63488 00:09:25.121 }, 00:09:25.121 { 00:09:25.121 "name": null, 00:09:25.121 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:25.121 "is_configured": false, 00:09:25.121 "data_offset": 0, 00:09:25.121 "data_size": 63488 00:09:25.121 }, 00:09:25.121 { 00:09:25.121 "name": "BaseBdev3", 00:09:25.121 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:25.121 "is_configured": true, 00:09:25.121 "data_offset": 2048, 00:09:25.121 "data_size": 63488 00:09:25.121 } 00:09:25.121 ] 00:09:25.121 }' 00:09:25.121 20:04:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.121 20:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.381 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.381 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.381 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.381 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.381 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.641 [2024-12-08 20:04:57.383333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.641 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.642 20:04:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.642 "name": "Existed_Raid", 00:09:25.642 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:25.642 "strip_size_kb": 0, 00:09:25.642 "state": "configuring", 00:09:25.642 "raid_level": "raid1", 00:09:25.642 "superblock": true, 00:09:25.642 "num_base_bdevs": 3, 00:09:25.642 "num_base_bdevs_discovered": 1, 00:09:25.642 "num_base_bdevs_operational": 3, 00:09:25.642 "base_bdevs_list": [ 00:09:25.642 { 00:09:25.642 "name": null, 00:09:25.642 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:25.642 "is_configured": false, 00:09:25.642 "data_offset": 0, 00:09:25.642 "data_size": 63488 00:09:25.642 }, 00:09:25.642 { 00:09:25.642 
"name": null, 00:09:25.642 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:25.642 "is_configured": false, 00:09:25.642 "data_offset": 0, 00:09:25.642 "data_size": 63488 00:09:25.642 }, 00:09:25.642 { 00:09:25.642 "name": "BaseBdev3", 00:09:25.642 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:25.642 "is_configured": true, 00:09:25.642 "data_offset": 2048, 00:09:25.642 "data_size": 63488 00:09:25.642 } 00:09:25.642 ] 00:09:25.642 }' 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.642 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.212 [2024-12-08 20:04:57.988492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.212 20:04:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.212 20:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.212 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.212 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.212 "name": "Existed_Raid", 00:09:26.212 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:26.212 "strip_size_kb": 0, 
00:09:26.212 "state": "configuring", 00:09:26.212 "raid_level": "raid1", 00:09:26.212 "superblock": true, 00:09:26.212 "num_base_bdevs": 3, 00:09:26.212 "num_base_bdevs_discovered": 2, 00:09:26.212 "num_base_bdevs_operational": 3, 00:09:26.212 "base_bdevs_list": [ 00:09:26.212 { 00:09:26.212 "name": null, 00:09:26.212 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:26.212 "is_configured": false, 00:09:26.212 "data_offset": 0, 00:09:26.212 "data_size": 63488 00:09:26.212 }, 00:09:26.212 { 00:09:26.212 "name": "BaseBdev2", 00:09:26.212 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:26.212 "is_configured": true, 00:09:26.212 "data_offset": 2048, 00:09:26.212 "data_size": 63488 00:09:26.212 }, 00:09:26.212 { 00:09:26.212 "name": "BaseBdev3", 00:09:26.212 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:26.212 "is_configured": true, 00:09:26.212 "data_offset": 2048, 00:09:26.212 "data_size": 63488 00:09:26.212 } 00:09:26.212 ] 00:09:26.212 }' 00:09:26.212 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.212 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.471 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.471 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.471 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.471 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8dd64dd7-49e1-438b-a70a-70972e2801ad 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 [2024-12-08 20:04:58.570161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:26.732 [2024-12-08 20:04:58.570389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.732 [2024-12-08 20:04:58.570402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.732 [2024-12-08 20:04:58.570681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:26.732 [2024-12-08 20:04:58.570865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.732 [2024-12-08 20:04:58.570886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:26.732 NewBaseBdev 00:09:26.732 [2024-12-08 20:04:58.571065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.732 [ 00:09:26.732 { 00:09:26.732 "name": "NewBaseBdev", 00:09:26.732 "aliases": [ 00:09:26.732 "8dd64dd7-49e1-438b-a70a-70972e2801ad" 00:09:26.732 ], 00:09:26.732 "product_name": "Malloc disk", 00:09:26.732 "block_size": 512, 00:09:26.732 "num_blocks": 65536, 00:09:26.732 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:26.732 "assigned_rate_limits": { 00:09:26.732 "rw_ios_per_sec": 0, 00:09:26.732 "rw_mbytes_per_sec": 0, 00:09:26.732 "r_mbytes_per_sec": 0, 00:09:26.732 "w_mbytes_per_sec": 0 00:09:26.732 }, 00:09:26.732 "claimed": true, 00:09:26.732 "claim_type": 
"exclusive_write", 00:09:26.732 "zoned": false, 00:09:26.732 "supported_io_types": { 00:09:26.732 "read": true, 00:09:26.732 "write": true, 00:09:26.732 "unmap": true, 00:09:26.732 "flush": true, 00:09:26.732 "reset": true, 00:09:26.732 "nvme_admin": false, 00:09:26.732 "nvme_io": false, 00:09:26.732 "nvme_io_md": false, 00:09:26.732 "write_zeroes": true, 00:09:26.732 "zcopy": true, 00:09:26.732 "get_zone_info": false, 00:09:26.732 "zone_management": false, 00:09:26.732 "zone_append": false, 00:09:26.732 "compare": false, 00:09:26.732 "compare_and_write": false, 00:09:26.732 "abort": true, 00:09:26.732 "seek_hole": false, 00:09:26.732 "seek_data": false, 00:09:26.732 "copy": true, 00:09:26.732 "nvme_iov_md": false 00:09:26.732 }, 00:09:26.732 "memory_domains": [ 00:09:26.732 { 00:09:26.732 "dma_device_id": "system", 00:09:26.732 "dma_device_type": 1 00:09:26.732 }, 00:09:26.732 { 00:09:26.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.732 "dma_device_type": 2 00:09:26.732 } 00:09:26.732 ], 00:09:26.732 "driver_specific": {} 00:09:26.732 } 00:09:26.732 ] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.732 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.733 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.733 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.733 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.733 "name": "Existed_Raid", 00:09:26.733 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:26.733 "strip_size_kb": 0, 00:09:26.733 "state": "online", 00:09:26.733 "raid_level": "raid1", 00:09:26.733 "superblock": true, 00:09:26.733 "num_base_bdevs": 3, 00:09:26.733 "num_base_bdevs_discovered": 3, 00:09:26.733 "num_base_bdevs_operational": 3, 00:09:26.733 "base_bdevs_list": [ 00:09:26.733 { 00:09:26.733 "name": "NewBaseBdev", 00:09:26.733 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad", 00:09:26.733 "is_configured": true, 00:09:26.733 "data_offset": 2048, 00:09:26.733 "data_size": 63488 00:09:26.733 }, 00:09:26.733 { 00:09:26.733 "name": "BaseBdev2", 00:09:26.733 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a", 00:09:26.733 "is_configured": true, 00:09:26.733 "data_offset": 2048, 00:09:26.733 "data_size": 63488 
00:09:26.733 }, 00:09:26.733 { 00:09:26.733 "name": "BaseBdev3", 00:09:26.733 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b", 00:09:26.733 "is_configured": true, 00:09:26.733 "data_offset": 2048, 00:09:26.733 "data_size": 63488 00:09:26.733 } 00:09:26.733 ] 00:09:26.733 }' 00:09:26.733 20:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.733 20:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.303 [2024-12-08 20:04:59.045735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.303 "name": 
"Existed_Raid", 00:09:27.303 "aliases": [ 00:09:27.303 "15a935f4-759e-43c7-a382-c4405426a25d" 00:09:27.303 ], 00:09:27.303 "product_name": "Raid Volume", 00:09:27.303 "block_size": 512, 00:09:27.303 "num_blocks": 63488, 00:09:27.303 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d", 00:09:27.303 "assigned_rate_limits": { 00:09:27.303 "rw_ios_per_sec": 0, 00:09:27.303 "rw_mbytes_per_sec": 0, 00:09:27.303 "r_mbytes_per_sec": 0, 00:09:27.303 "w_mbytes_per_sec": 0 00:09:27.303 }, 00:09:27.303 "claimed": false, 00:09:27.303 "zoned": false, 00:09:27.303 "supported_io_types": { 00:09:27.303 "read": true, 00:09:27.303 "write": true, 00:09:27.303 "unmap": false, 00:09:27.303 "flush": false, 00:09:27.303 "reset": true, 00:09:27.303 "nvme_admin": false, 00:09:27.303 "nvme_io": false, 00:09:27.303 "nvme_io_md": false, 00:09:27.303 "write_zeroes": true, 00:09:27.303 "zcopy": false, 00:09:27.303 "get_zone_info": false, 00:09:27.303 "zone_management": false, 00:09:27.303 "zone_append": false, 00:09:27.303 "compare": false, 00:09:27.303 "compare_and_write": false, 00:09:27.303 "abort": false, 00:09:27.303 "seek_hole": false, 00:09:27.303 "seek_data": false, 00:09:27.303 "copy": false, 00:09:27.303 "nvme_iov_md": false 00:09:27.303 }, 00:09:27.303 "memory_domains": [ 00:09:27.303 { 00:09:27.303 "dma_device_id": "system", 00:09:27.303 "dma_device_type": 1 00:09:27.303 }, 00:09:27.303 { 00:09:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.303 "dma_device_type": 2 00:09:27.303 }, 00:09:27.303 { 00:09:27.303 "dma_device_id": "system", 00:09:27.303 "dma_device_type": 1 00:09:27.303 }, 00:09:27.303 { 00:09:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.303 "dma_device_type": 2 00:09:27.303 }, 00:09:27.303 { 00:09:27.303 "dma_device_id": "system", 00:09:27.303 "dma_device_type": 1 00:09:27.303 }, 00:09:27.303 { 00:09:27.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.303 "dma_device_type": 2 00:09:27.303 } 00:09:27.303 ], 00:09:27.303 "driver_specific": { 
00:09:27.303 "raid": {
00:09:27.303 "uuid": "15a935f4-759e-43c7-a382-c4405426a25d",
00:09:27.303 "strip_size_kb": 0,
00:09:27.303 "state": "online",
00:09:27.303 "raid_level": "raid1",
00:09:27.303 "superblock": true,
00:09:27.303 "num_base_bdevs": 3,
00:09:27.303 "num_base_bdevs_discovered": 3,
00:09:27.303 "num_base_bdevs_operational": 3,
00:09:27.303 "base_bdevs_list": [
00:09:27.303 {
00:09:27.303 "name": "NewBaseBdev",
00:09:27.303 "uuid": "8dd64dd7-49e1-438b-a70a-70972e2801ad",
00:09:27.303 "is_configured": true,
00:09:27.303 "data_offset": 2048,
00:09:27.303 "data_size": 63488
00:09:27.303 },
00:09:27.303 {
00:09:27.303 "name": "BaseBdev2",
00:09:27.303 "uuid": "5e931ae8-b32d-4eec-9f88-c85c50f6c59a",
00:09:27.303 "is_configured": true,
00:09:27.303 "data_offset": 2048,
00:09:27.303 "data_size": 63488
00:09:27.303 },
00:09:27.303 {
00:09:27.303 "name": "BaseBdev3",
00:09:27.303 "uuid": "a390785a-e1b4-4e80-a5f9-b6df9741fa0b",
00:09:27.303 "is_configured": true,
00:09:27.303 "data_offset": 2048,
00:09:27.303 "data_size": 63488
00:09:27.303 }
00:09:27.303 ]
00:09:27.303 }
00:09:27.303 }
00:09:27.303 }'
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:27.303 BaseBdev2
00:09:27.303 BaseBdev3'
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.303 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.304 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.564 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.564 [2024-12-08 20:04:59.293013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:27.565 [2024-12-08 20:04:59.293050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:27.565 [2024-12-08 20:04:59.293124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:27.565 [2024-12-08 20:04:59.293413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:27.565 [2024-12-08 20:04:59.293432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67840
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67840 ']'
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67840
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67840
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:27.565 killing process with pid 67840
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67840'
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67840
00:09:27.565 [2024-12-08 20:04:59.343483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:27.565 20:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67840
00:09:27.824 [2024-12-08 20:04:59.641507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:29.206 20:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:29.206
00:09:29.206 real 0m10.567s
00:09:29.206 user 0m16.843s
00:09:29.206 sys 0m1.849s
00:09:29.206 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.206 20:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.206 ************************************
00:09:29.206 END TEST raid_state_function_test_sb
00:09:29.206 ************************************
00:09:29.206 20:05:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:09:29.206 20:05:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:29.206 20:05:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.206 20:05:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:29.206 ************************************
00:09:29.206 START TEST raid_superblock_test
00:09:29.206 ************************************
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68461
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68461
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68461 ']'
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:29.206 20:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.206 [2024-12-08 20:05:00.909375] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:09:29.206 [2024-12-08 20:05:00.909485] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68461 ]
00:09:29.206 [2024-12-08 20:05:01.083754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.504 [2024-12-08 20:05:01.195163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.504 [2024-12-08 20:05:01.391736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:29.504 [2024-12-08 20:05:01.391778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.767 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.027 malloc1
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.027 [2024-12-08 20:05:01.790181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:30.027 [2024-12-08 20:05:01.790239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:30.027 [2024-12-08 20:05:01.790259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:30.027 [2024-12-08 20:05:01.790268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:30.027 [2024-12-08 20:05:01.792469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:30.027 [2024-12-08 20:05:01.792507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:30.027 pt1
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.027 malloc2
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.027 [2024-12-08 20:05:01.845596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:30.027 [2024-12-08 20:05:01.845662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:30.027 [2024-12-08 20:05:01.845686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:30.027 [2024-12-08 20:05:01.845695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:30.027 [2024-12-08 20:05:01.847687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:30.027 [2024-12-08 20:05:01.847724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:30.027 pt2
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:30.027 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.028 malloc3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.028 [2024-12-08 20:05:01.908769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:30.028 [2024-12-08 20:05:01.908833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:30.028 [2024-12-08 20:05:01.908853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:30.028 [2024-12-08 20:05:01.908862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:30.028 [2024-12-08 20:05:01.910858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:30.028 [2024-12-08 20:05:01.910893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:30.028 pt3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.028 [2024-12-08 20:05:01.920801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:30.028 [2024-12-08 20:05:01.922526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:30.028 [2024-12-08 20:05:01.922657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:30.028 [2024-12-08 20:05:01.922830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:30.028 [2024-12-08 20:05:01.922850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:30.028 [2024-12-08 20:05:01.923088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:30.028 [2024-12-08 20:05:01.923288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:30.028 [2024-12-08 20:05:01.923303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:30.028 [2024-12-08 20:05:01.923458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:30.028 "name": "raid_bdev1",
00:09:30.028 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112",
00:09:30.028 "strip_size_kb": 0,
00:09:30.028 "state": "online",
00:09:30.028 "raid_level": "raid1",
00:09:30.028 "superblock": true,
00:09:30.028 "num_base_bdevs": 3,
00:09:30.028 "num_base_bdevs_discovered": 3,
00:09:30.028 "num_base_bdevs_operational": 3,
00:09:30.028 "base_bdevs_list": [
00:09:30.028 {
00:09:30.028 "name": "pt1",
00:09:30.028 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:30.028 "is_configured": true,
00:09:30.028 "data_offset": 2048,
00:09:30.028 "data_size": 63488
00:09:30.028 },
00:09:30.028 {
00:09:30.028 "name": "pt2",
00:09:30.028 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:30.028 "is_configured": true,
00:09:30.028 "data_offset": 2048,
00:09:30.028 "data_size": 63488
00:09:30.028 },
00:09:30.028 {
00:09:30.028 "name": "pt3",
00:09:30.028 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:30.028 "is_configured": true,
00:09:30.028 "data_offset": 2048,
00:09:30.028 "data_size": 63488
00:09:30.028 }
00:09:30.028 ]
00:09:30.028 }'
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:30.028 20:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.598 [2024-12-08 20:05:02.344380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:30.598 "name": "raid_bdev1",
00:09:30.598 "aliases": [
00:09:30.598 "a2d79cf3-6d5b-44cc-925c-247440914112"
00:09:30.598 ],
00:09:30.598 "product_name": "Raid Volume",
00:09:30.598 "block_size": 512,
00:09:30.598 "num_blocks": 63488,
00:09:30.598 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112",
00:09:30.598 "assigned_rate_limits": {
00:09:30.598 "rw_ios_per_sec": 0,
00:09:30.598 "rw_mbytes_per_sec": 0,
00:09:30.598 "r_mbytes_per_sec": 0,
00:09:30.598 "w_mbytes_per_sec": 0
00:09:30.598 },
00:09:30.598 "claimed": false,
00:09:30.598 "zoned": false,
00:09:30.598 "supported_io_types": {
00:09:30.598 "read": true,
00:09:30.598 "write": true,
00:09:30.598 "unmap": false,
00:09:30.598 "flush": false,
00:09:30.598 "reset": true,
00:09:30.598 "nvme_admin": false,
00:09:30.598 "nvme_io": false,
00:09:30.598 "nvme_io_md": false,
00:09:30.598 "write_zeroes": true,
00:09:30.598 "zcopy": false,
00:09:30.598 "get_zone_info": false,
00:09:30.598 "zone_management": false,
00:09:30.598 "zone_append": false,
00:09:30.598 "compare": false,
00:09:30.598 "compare_and_write": false,
00:09:30.598 "abort": false,
00:09:30.598 "seek_hole": false,
00:09:30.598 "seek_data": false,
00:09:30.598 "copy": false,
00:09:30.598 "nvme_iov_md": false
00:09:30.598 },
00:09:30.598 "memory_domains": [
00:09:30.598 {
00:09:30.598 "dma_device_id": "system",
00:09:30.598 "dma_device_type": 1
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.598 "dma_device_type": 2
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "dma_device_id": "system",
00:09:30.598 "dma_device_type": 1
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.598 "dma_device_type": 2
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "dma_device_id": "system",
00:09:30.598 "dma_device_type": 1
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.598 "dma_device_type": 2
00:09:30.598 }
00:09:30.598 ],
00:09:30.598 "driver_specific": {
00:09:30.598 "raid": {
00:09:30.598 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112",
00:09:30.598 "strip_size_kb": 0,
00:09:30.598 "state": "online",
00:09:30.598 "raid_level": "raid1",
00:09:30.598 "superblock": true,
00:09:30.598 "num_base_bdevs": 3,
00:09:30.598 "num_base_bdevs_discovered": 3,
00:09:30.598 "num_base_bdevs_operational": 3,
00:09:30.598 "base_bdevs_list": [
00:09:30.598 {
00:09:30.598 "name": "pt1",
00:09:30.598 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:30.598 "is_configured": true,
00:09:30.598 "data_offset": 2048,
00:09:30.598 "data_size": 63488
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "name": "pt2",
00:09:30.598 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:30.598 "is_configured": true,
00:09:30.598 "data_offset": 2048,
00:09:30.598 "data_size": 63488
00:09:30.598 },
00:09:30.598 {
00:09:30.598 "name": "pt3",
00:09:30.598 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:30.598 "is_configured": true,
00:09:30.598 "data_offset": 2048,
00:09:30.598 "data_size": 63488
00:09:30.598 }
00:09:30.598 ]
00:09:30.598 }
00:09:30.598 }
00:09:30.598 }'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:30.598 pt2
00:09:30.598 pt3'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.598 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.858 [2024-12-08 20:05:02.599844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2d79cf3-6d5b-44cc-925c-247440914112
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a2d79cf3-6d5b-44cc-925c-247440914112 ']'
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.858 [2024-12-08 20:05:02.643516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:30.858 [2024-12-08 20:05:02.643582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:30.858 [2024-12-08 20:05:02.643688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:30.858 [2024-12-08 20:05:02.643810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:30.858 [2024-12-08 20:05:02.643866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.858 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.859 [2024-12-08 20:05:02.795320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:30.859 [2024-12-08 20:05:02.797170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:30.859 [2024-12-08 20:05:02.797231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:30.859 [2024-12-08 20:05:02.797284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:30.859 [2024-12-08 20:05:02.797333] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:30.859 [2024-12-08 20:05:02.797353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:30.859 [2024-12-08 20:05:02.797369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:30.859 [2024-12-08 20:05:02.797378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:30.859 request:
00:09:30.859 {
00:09:30.859 "name": "raid_bdev1",
00:09:30.859 "raid_level": "raid1",
00:09:30.859 "base_bdevs": [
00:09:30.859 "malloc1",
00:09:30.859 "malloc2",
00:09:30.859 "malloc3"
00:09:30.859 ],
00:09:30.859 "superblock": false,
00:09:30.859 "method": "bdev_raid_create",
00:09:30.859 "req_id": 1
00:09:30.859 }
00:09:30.859 Got JSON-RPC error response
00:09:30.859 response:
00:09:30.859 {
00:09:30.859 "code": -17,
00:09:30.859 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:30.859 }
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- #
rpc_cmd bdev_raid_get_bdevs all 00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:30.859 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.860 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.860 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 [2024-12-08 20:05:02.855250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.119 [2024-12-08 20:05:02.855346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.119 [2024-12-08 20:05:02.855384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:31.119 [2024-12-08 20:05:02.855418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.119 [2024-12-08 20:05:02.857636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.119 [2024-12-08 20:05:02.857709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.119 [2024-12-08 20:05:02.857812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:31.119 [2024-12-08 20:05:02.857912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:31.119 pt1 00:09:31.119 
20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.119 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.120 "name": "raid_bdev1", 00:09:31.120 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:31.120 "strip_size_kb": 0, 00:09:31.120 
"state": "configuring", 00:09:31.120 "raid_level": "raid1", 00:09:31.120 "superblock": true, 00:09:31.120 "num_base_bdevs": 3, 00:09:31.120 "num_base_bdevs_discovered": 1, 00:09:31.120 "num_base_bdevs_operational": 3, 00:09:31.120 "base_bdevs_list": [ 00:09:31.120 { 00:09:31.120 "name": "pt1", 00:09:31.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.120 "is_configured": true, 00:09:31.120 "data_offset": 2048, 00:09:31.120 "data_size": 63488 00:09:31.120 }, 00:09:31.120 { 00:09:31.120 "name": null, 00:09:31.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.120 "is_configured": false, 00:09:31.120 "data_offset": 2048, 00:09:31.120 "data_size": 63488 00:09:31.120 }, 00:09:31.120 { 00:09:31.120 "name": null, 00:09:31.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.120 "is_configured": false, 00:09:31.120 "data_offset": 2048, 00:09:31.120 "data_size": 63488 00:09:31.120 } 00:09:31.120 ] 00:09:31.120 }' 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.120 20:05:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 [2024-12-08 20:05:03.286510] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.379 [2024-12-08 20:05:03.286588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.379 [2024-12-08 20:05:03.286612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:31.379 
[2024-12-08 20:05:03.286621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.379 [2024-12-08 20:05:03.287076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.379 [2024-12-08 20:05:03.287095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.379 [2024-12-08 20:05:03.287209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.379 [2024-12-08 20:05:03.287238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.379 pt2 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 [2024-12-08 20:05:03.298497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.379 20:05:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.380 "name": "raid_bdev1", 00:09:31.380 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:31.380 "strip_size_kb": 0, 00:09:31.380 "state": "configuring", 00:09:31.380 "raid_level": "raid1", 00:09:31.380 "superblock": true, 00:09:31.380 "num_base_bdevs": 3, 00:09:31.380 "num_base_bdevs_discovered": 1, 00:09:31.380 "num_base_bdevs_operational": 3, 00:09:31.380 "base_bdevs_list": [ 00:09:31.380 { 00:09:31.380 "name": "pt1", 00:09:31.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.380 "is_configured": true, 00:09:31.380 "data_offset": 2048, 00:09:31.380 "data_size": 63488 00:09:31.380 }, 00:09:31.380 { 00:09:31.380 "name": null, 00:09:31.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.380 "is_configured": false, 00:09:31.380 "data_offset": 0, 00:09:31.380 "data_size": 63488 00:09:31.380 }, 00:09:31.380 { 00:09:31.380 "name": null, 00:09:31.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.380 "is_configured": false, 00:09:31.380 
"data_offset": 2048, 00:09:31.380 "data_size": 63488 00:09:31.380 } 00:09:31.380 ] 00:09:31.380 }' 00:09:31.380 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.639 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 [2024-12-08 20:05:03.713794] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.899 [2024-12-08 20:05:03.713909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.899 [2024-12-08 20:05:03.713982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:31.899 [2024-12-08 20:05:03.714028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.899 [2024-12-08 20:05:03.714528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.899 [2024-12-08 20:05:03.714597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.899 [2024-12-08 20:05:03.714731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.899 [2024-12-08 20:05:03.714811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.899 pt2 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.899 20:05:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 [2024-12-08 20:05:03.725766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.899 [2024-12-08 20:05:03.725857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.899 [2024-12-08 20:05:03.725889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:31.899 [2024-12-08 20:05:03.725931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.899 [2024-12-08 20:05:03.726384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.899 [2024-12-08 20:05:03.726461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.899 [2024-12-08 20:05:03.726572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:31.899 [2024-12-08 20:05:03.726625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.899 [2024-12-08 20:05:03.726808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.899 [2024-12-08 20:05:03.726854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.899 [2024-12-08 20:05:03.727181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:31.899 [2024-12-08 20:05:03.727458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:31.899 [2024-12-08 20:05:03.727507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:31.899 [2024-12-08 20:05:03.727746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.899 pt3 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.899 20:05:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.899 "name": "raid_bdev1", 00:09:31.899 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:31.899 "strip_size_kb": 0, 00:09:31.899 "state": "online", 00:09:31.899 "raid_level": "raid1", 00:09:31.899 "superblock": true, 00:09:31.899 "num_base_bdevs": 3, 00:09:31.899 "num_base_bdevs_discovered": 3, 00:09:31.899 "num_base_bdevs_operational": 3, 00:09:31.899 "base_bdevs_list": [ 00:09:31.899 { 00:09:31.899 "name": "pt1", 00:09:31.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.899 "is_configured": true, 00:09:31.899 "data_offset": 2048, 00:09:31.899 "data_size": 63488 00:09:31.899 }, 00:09:31.899 { 00:09:31.899 "name": "pt2", 00:09:31.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.899 "is_configured": true, 00:09:31.899 "data_offset": 2048, 00:09:31.899 "data_size": 63488 00:09:31.899 }, 00:09:31.899 { 00:09:31.899 "name": "pt3", 00:09:31.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.899 "is_configured": true, 00:09:31.899 "data_offset": 2048, 00:09:31.899 "data_size": 63488 00:09:31.899 } 00:09:31.899 ] 00:09:31.899 }' 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.899 20:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 [2024-12-08 20:05:04.161361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.468 "name": "raid_bdev1", 00:09:32.468 "aliases": [ 00:09:32.468 "a2d79cf3-6d5b-44cc-925c-247440914112" 00:09:32.468 ], 00:09:32.468 "product_name": "Raid Volume", 00:09:32.468 "block_size": 512, 00:09:32.468 "num_blocks": 63488, 00:09:32.468 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:32.468 "assigned_rate_limits": { 00:09:32.468 "rw_ios_per_sec": 0, 00:09:32.468 "rw_mbytes_per_sec": 0, 00:09:32.468 "r_mbytes_per_sec": 0, 00:09:32.468 "w_mbytes_per_sec": 0 00:09:32.468 }, 00:09:32.468 "claimed": false, 00:09:32.468 "zoned": false, 00:09:32.468 "supported_io_types": { 00:09:32.468 "read": true, 00:09:32.468 "write": true, 00:09:32.468 "unmap": false, 00:09:32.468 "flush": false, 00:09:32.468 "reset": true, 00:09:32.468 "nvme_admin": false, 00:09:32.468 "nvme_io": false, 00:09:32.468 "nvme_io_md": false, 00:09:32.468 "write_zeroes": true, 00:09:32.468 "zcopy": false, 00:09:32.468 "get_zone_info": 
false, 00:09:32.468 "zone_management": false, 00:09:32.468 "zone_append": false, 00:09:32.468 "compare": false, 00:09:32.468 "compare_and_write": false, 00:09:32.468 "abort": false, 00:09:32.468 "seek_hole": false, 00:09:32.468 "seek_data": false, 00:09:32.468 "copy": false, 00:09:32.468 "nvme_iov_md": false 00:09:32.468 }, 00:09:32.468 "memory_domains": [ 00:09:32.468 { 00:09:32.468 "dma_device_id": "system", 00:09:32.468 "dma_device_type": 1 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.468 "dma_device_type": 2 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "dma_device_id": "system", 00:09:32.468 "dma_device_type": 1 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.468 "dma_device_type": 2 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "dma_device_id": "system", 00:09:32.468 "dma_device_type": 1 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.468 "dma_device_type": 2 00:09:32.468 } 00:09:32.468 ], 00:09:32.468 "driver_specific": { 00:09:32.468 "raid": { 00:09:32.468 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:32.468 "strip_size_kb": 0, 00:09:32.468 "state": "online", 00:09:32.468 "raid_level": "raid1", 00:09:32.468 "superblock": true, 00:09:32.468 "num_base_bdevs": 3, 00:09:32.468 "num_base_bdevs_discovered": 3, 00:09:32.468 "num_base_bdevs_operational": 3, 00:09:32.468 "base_bdevs_list": [ 00:09:32.468 { 00:09:32.468 "name": "pt1", 00:09:32.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.468 "is_configured": true, 00:09:32.468 "data_offset": 2048, 00:09:32.468 "data_size": 63488 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "name": "pt2", 00:09:32.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.468 "is_configured": true, 00:09:32.468 "data_offset": 2048, 00:09:32.468 "data_size": 63488 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "name": "pt3", 00:09:32.468 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:32.468 "is_configured": true, 00:09:32.468 "data_offset": 2048, 00:09:32.468 "data_size": 63488 00:09:32.468 } 00:09:32.468 ] 00:09:32.468 } 00:09:32.468 } 00:09:32.468 }' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.468 pt2 00:09:32.468 pt3' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.468 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.469 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 [2024-12-08 20:05:04.448831] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a2d79cf3-6d5b-44cc-925c-247440914112 '!=' a2d79cf3-6d5b-44cc-925c-247440914112 ']' 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 [2024-12-08 20:05:04.496515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.729 20:05:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.729 "name": "raid_bdev1", 00:09:32.729 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:32.729 "strip_size_kb": 0, 00:09:32.729 "state": "online", 00:09:32.729 "raid_level": "raid1", 00:09:32.729 "superblock": true, 00:09:32.729 "num_base_bdevs": 3, 00:09:32.729 "num_base_bdevs_discovered": 2, 00:09:32.729 "num_base_bdevs_operational": 2, 00:09:32.729 "base_bdevs_list": [ 00:09:32.729 { 00:09:32.729 "name": null, 00:09:32.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.729 "is_configured": false, 00:09:32.729 "data_offset": 0, 00:09:32.729 "data_size": 63488 00:09:32.729 }, 00:09:32.729 { 00:09:32.729 "name": "pt2", 00:09:32.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.729 "is_configured": true, 00:09:32.729 "data_offset": 2048, 00:09:32.729 "data_size": 63488 00:09:32.729 }, 00:09:32.729 { 00:09:32.729 "name": "pt3", 00:09:32.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.729 "is_configured": true, 00:09:32.729 "data_offset": 2048, 00:09:32.729 "data_size": 63488 00:09:32.729 } 
00:09:32.729 ] 00:09:32.729 }' 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.729 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.989 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.989 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.989 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.989 [2024-12-08 20:05:04.951820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.989 [2024-12-08 20:05:04.951851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.989 [2024-12-08 20:05:04.951929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.989 [2024-12-08 20:05:04.952010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.989 [2024-12-08 20:05:04.952024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:32.990 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.990 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.990 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.990 20:05:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:32.990 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 20:05:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.257 20:05:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 [2024-12-08 20:05:05.035631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.257 [2024-12-08 20:05:05.035685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.257 [2024-12-08 20:05:05.035703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:33.257 [2024-12-08 20:05:05.035712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.257 [2024-12-08 20:05:05.037923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.257 [2024-12-08 20:05:05.037975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.257 [2024-12-08 20:05:05.038049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.257 [2024-12-08 20:05:05.038096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.257 pt2 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.257 20:05:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.257 "name": "raid_bdev1", 00:09:33.257 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:33.257 "strip_size_kb": 0, 00:09:33.257 "state": "configuring", 00:09:33.257 "raid_level": "raid1", 00:09:33.257 "superblock": true, 00:09:33.257 "num_base_bdevs": 3, 00:09:33.257 "num_base_bdevs_discovered": 1, 00:09:33.257 "num_base_bdevs_operational": 2, 00:09:33.257 "base_bdevs_list": [ 00:09:33.257 { 00:09:33.257 "name": null, 00:09:33.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.257 "is_configured": false, 00:09:33.257 "data_offset": 2048, 00:09:33.257 "data_size": 63488 00:09:33.257 }, 00:09:33.257 { 00:09:33.257 "name": "pt2", 00:09:33.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.257 "is_configured": true, 00:09:33.257 "data_offset": 2048, 00:09:33.257 "data_size": 63488 00:09:33.257 }, 00:09:33.257 { 00:09:33.257 "name": null, 00:09:33.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.257 "is_configured": false, 00:09:33.257 "data_offset": 2048, 00:09:33.257 "data_size": 63488 00:09:33.257 } 
00:09:33.257 ] 00:09:33.257 }' 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.257 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.517 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.517 [2024-12-08 20:05:05.490901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.517 [2024-12-08 20:05:05.491025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.517 [2024-12-08 20:05:05.491087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:33.517 [2024-12-08 20:05:05.491147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.518 [2024-12-08 20:05:05.491672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.518 [2024-12-08 20:05:05.491738] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.518 [2024-12-08 20:05:05.491884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:33.518 [2024-12-08 20:05:05.491964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.518 [2024-12-08 20:05:05.492152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:33.518 [2024-12-08 20:05:05.492195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.518 [2024-12-08 20:05:05.492501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:33.518 [2024-12-08 20:05:05.492716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.518 [2024-12-08 20:05:05.492761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:33.518 [2024-12-08 20:05:05.492998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.776 pt3 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.776 
20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.776 "name": "raid_bdev1", 00:09:33.776 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:33.776 "strip_size_kb": 0, 00:09:33.776 "state": "online", 00:09:33.776 "raid_level": "raid1", 00:09:33.776 "superblock": true, 00:09:33.776 "num_base_bdevs": 3, 00:09:33.776 "num_base_bdevs_discovered": 2, 00:09:33.776 "num_base_bdevs_operational": 2, 00:09:33.776 "base_bdevs_list": [ 00:09:33.776 { 00:09:33.776 "name": null, 00:09:33.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.776 "is_configured": false, 00:09:33.776 "data_offset": 2048, 00:09:33.776 "data_size": 63488 00:09:33.776 }, 00:09:33.776 { 00:09:33.776 "name": "pt2", 00:09:33.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.776 "is_configured": true, 00:09:33.776 "data_offset": 2048, 00:09:33.776 "data_size": 63488 00:09:33.776 }, 00:09:33.776 { 00:09:33.776 "name": "pt3", 00:09:33.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.776 "is_configured": true, 00:09:33.776 "data_offset": 2048, 00:09:33.776 "data_size": 63488 00:09:33.776 } 00:09:33.776 ] 00:09:33.776 }' 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.776 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 [2024-12-08 20:05:05.918160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.036 [2024-12-08 20:05:05.918192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.036 [2024-12-08 20:05:05.918269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.036 [2024-12-08 20:05:05.918331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.036 [2024-12-08 20:05:05.918340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.036 [2024-12-08 20:05:05.994069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.036 [2024-12-08 20:05:05.994132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.036 [2024-12-08 20:05:05.994152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:34.036 [2024-12-08 20:05:05.994162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.036 [2024-12-08 20:05:05.996531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.036 [2024-12-08 20:05:05.996609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.036 [2024-12-08 20:05:05.996737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.036 [2024-12-08 20:05:05.996790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.036 [2024-12-08 20:05:05.996933] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:34.036 [2024-12-08 20:05:05.996944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.036 [2024-12-08 20:05:05.996980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:34.036 [2024-12-08 20:05:05.997044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.036 pt1 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.036 20:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.036 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.296 20:05:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.296 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.296 "name": "raid_bdev1", 00:09:34.296 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:34.296 "strip_size_kb": 0, 00:09:34.296 "state": "configuring", 00:09:34.296 "raid_level": "raid1", 00:09:34.296 "superblock": true, 00:09:34.296 "num_base_bdevs": 3, 00:09:34.296 "num_base_bdevs_discovered": 1, 00:09:34.296 "num_base_bdevs_operational": 2, 00:09:34.296 "base_bdevs_list": [ 00:09:34.296 { 00:09:34.296 "name": null, 00:09:34.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.296 "is_configured": false, 00:09:34.296 "data_offset": 2048, 00:09:34.296 "data_size": 63488 00:09:34.296 }, 00:09:34.296 { 00:09:34.296 "name": "pt2", 00:09:34.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.296 "is_configured": true, 00:09:34.296 "data_offset": 2048, 00:09:34.296 "data_size": 63488 00:09:34.296 }, 00:09:34.296 { 00:09:34.296 "name": null, 00:09:34.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.296 "is_configured": false, 00:09:34.296 "data_offset": 2048, 00:09:34.296 "data_size": 63488 00:09:34.296 } 00:09:34.296 ] 00:09:34.296 }' 00:09:34.296 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.296 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 [2024-12-08 20:05:06.469272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:34.556 [2024-12-08 20:05:06.469385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.556 [2024-12-08 20:05:06.469454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:34.556 [2024-12-08 20:05:06.469495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.556 [2024-12-08 20:05:06.470025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.556 [2024-12-08 20:05:06.470085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:34.556 [2024-12-08 20:05:06.470210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:34.556 [2024-12-08 20:05:06.470264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:34.556 [2024-12-08 20:05:06.470445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:34.556 [2024-12-08 20:05:06.470486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.556 [2024-12-08 20:05:06.470787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:34.556 [2024-12-08 20:05:06.471008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:34.556 [2024-12-08 20:05:06.471059] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:34.556 [2024-12-08 20:05:06.471341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.556 pt3 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.556 "name": "raid_bdev1", 00:09:34.556 "uuid": "a2d79cf3-6d5b-44cc-925c-247440914112", 00:09:34.556 "strip_size_kb": 0, 00:09:34.556 "state": "online", 00:09:34.556 "raid_level": "raid1", 00:09:34.556 "superblock": true, 00:09:34.556 "num_base_bdevs": 3, 00:09:34.556 "num_base_bdevs_discovered": 2, 00:09:34.556 "num_base_bdevs_operational": 2, 00:09:34.556 "base_bdevs_list": [ 00:09:34.556 { 00:09:34.556 "name": null, 00:09:34.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.556 "is_configured": false, 00:09:34.556 "data_offset": 2048, 00:09:34.556 "data_size": 63488 00:09:34.556 }, 00:09:34.556 { 00:09:34.556 "name": "pt2", 00:09:34.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.556 "is_configured": true, 00:09:34.556 "data_offset": 2048, 00:09:34.556 "data_size": 63488 00:09:34.556 }, 00:09:34.556 { 00:09:34.556 "name": "pt3", 00:09:34.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.556 "is_configured": true, 00:09:34.556 "data_offset": 2048, 00:09:34.556 "data_size": 63488 00:09:34.556 } 00:09:34.556 ] 00:09:34.556 }' 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.556 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:35.124 [2024-12-08 20:05:06.936739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a2d79cf3-6d5b-44cc-925c-247440914112 '!=' a2d79cf3-6d5b-44cc-925c-247440914112 ']' 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68461 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68461 ']' 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68461 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.124 20:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68461 00:09:35.124 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.125 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.125 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68461' 00:09:35.125 killing process with pid 68461 00:09:35.125 20:05:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68461 00:09:35.125 [2024-12-08 20:05:07.010419] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.125 [2024-12-08 20:05:07.010568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.125 20:05:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68461 00:09:35.125 [2024-12-08 20:05:07.010637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.125 [2024-12-08 20:05:07.010650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:35.384 [2024-12-08 20:05:07.313332] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.808 20:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:36.808 00:09:36.808 real 0m7.612s 00:09:36.808 user 0m11.916s 00:09:36.808 sys 0m1.316s 00:09:36.808 ************************************ 00:09:36.808 END TEST raid_superblock_test 00:09:36.808 ************************************ 00:09:36.808 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.808 20:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.808 20:05:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:36.808 20:05:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.808 20:05:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.808 20:05:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.808 ************************************ 00:09:36.808 START TEST raid_read_error_test 00:09:36.808 ************************************ 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:36.808 20:05:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:36.808 20:05:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:36.808 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.54RxtJQs8X 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68902 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68902 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68902 ']' 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.809 20:05:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.809 [2024-12-08 20:05:08.606547] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:36.809 [2024-12-08 20:05:08.606728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68902 ] 00:09:36.809 [2024-12-08 20:05:08.777629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.067 [2024-12-08 20:05:08.891244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.326 [2024-12-08 20:05:09.094031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.326 [2024-12-08 20:05:09.094072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 BaseBdev1_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 true 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 [2024-12-08 20:05:09.492891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.586 [2024-12-08 20:05:09.492998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.586 [2024-12-08 20:05:09.493023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.586 [2024-12-08 20:05:09.493035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.586 [2024-12-08 20:05:09.495035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.586 [2024-12-08 20:05:09.495075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.586 BaseBdev1 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 BaseBdev2_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 true 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.586 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 [2024-12-08 20:05:09.559868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:37.586 [2024-12-08 20:05:09.559996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.586 [2024-12-08 20:05:09.560017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:37.586 [2024-12-08 20:05:09.560028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.847 [2024-12-08 20:05:09.562238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.847 [2024-12-08 20:05:09.562276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:37.847 BaseBdev2 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.847 BaseBdev3_malloc 00:09:37.847 20:05:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.847 true 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.847 [2024-12-08 20:05:09.639518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:37.847 [2024-12-08 20:05:09.639628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.847 [2024-12-08 20:05:09.639669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:37.847 [2024-12-08 20:05:09.639706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.847 [2024-12-08 20:05:09.641899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.847 [2024-12-08 20:05:09.641991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:37.847 BaseBdev3 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.847 [2024-12-08 20:05:09.651562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.847 [2024-12-08 20:05:09.653480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.847 [2024-12-08 20:05:09.653591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.847 [2024-12-08 20:05:09.653833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:37.847 [2024-12-08 20:05:09.653882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.847 [2024-12-08 20:05:09.654162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:37.847 [2024-12-08 20:05:09.654379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:37.847 [2024-12-08 20:05:09.654442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:37.847 [2024-12-08 20:05:09.654664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.847 20:05:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.847 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.847 "name": "raid_bdev1", 00:09:37.847 "uuid": "77690b28-f2d4-46bb-ac3d-49f4b564ad04", 00:09:37.847 "strip_size_kb": 0, 00:09:37.847 "state": "online", 00:09:37.847 "raid_level": "raid1", 00:09:37.847 "superblock": true, 00:09:37.847 "num_base_bdevs": 3, 00:09:37.847 "num_base_bdevs_discovered": 3, 00:09:37.847 "num_base_bdevs_operational": 3, 00:09:37.847 "base_bdevs_list": [ 00:09:37.847 { 00:09:37.847 "name": "BaseBdev1", 00:09:37.847 "uuid": "33d25c97-b0ca-51a9-ac5c-14548ec0914d", 00:09:37.848 "is_configured": true, 00:09:37.848 "data_offset": 2048, 00:09:37.848 "data_size": 63488 00:09:37.848 }, 00:09:37.848 { 00:09:37.848 "name": "BaseBdev2", 00:09:37.848 "uuid": "9968142c-5550-54eb-ab35-3b5be8d53cdc", 00:09:37.848 "is_configured": true, 00:09:37.848 "data_offset": 2048, 00:09:37.848 "data_size": 63488 
00:09:37.848 }, 00:09:37.848 { 00:09:37.848 "name": "BaseBdev3", 00:09:37.848 "uuid": "949ae784-3cf9-572b-95c4-6c768ccfe285", 00:09:37.848 "is_configured": true, 00:09:37.848 "data_offset": 2048, 00:09:37.848 "data_size": 63488 00:09:37.848 } 00:09:37.848 ] 00:09:37.848 }' 00:09:37.848 20:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.848 20:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:38.418 20:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.418 [2024-12-08 20:05:10.219831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.359 
20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.359 "name": "raid_bdev1", 00:09:39.359 "uuid": "77690b28-f2d4-46bb-ac3d-49f4b564ad04", 00:09:39.359 "strip_size_kb": 0, 00:09:39.359 "state": "online", 00:09:39.359 "raid_level": "raid1", 00:09:39.359 "superblock": true, 00:09:39.359 "num_base_bdevs": 3, 00:09:39.359 "num_base_bdevs_discovered": 3, 00:09:39.359 "num_base_bdevs_operational": 3, 00:09:39.359 "base_bdevs_list": [ 00:09:39.359 { 00:09:39.359 "name": "BaseBdev1", 00:09:39.359 "uuid": "33d25c97-b0ca-51a9-ac5c-14548ec0914d", 
00:09:39.359 "is_configured": true, 00:09:39.359 "data_offset": 2048, 00:09:39.359 "data_size": 63488 00:09:39.359 }, 00:09:39.359 { 00:09:39.359 "name": "BaseBdev2", 00:09:39.359 "uuid": "9968142c-5550-54eb-ab35-3b5be8d53cdc", 00:09:39.359 "is_configured": true, 00:09:39.359 "data_offset": 2048, 00:09:39.359 "data_size": 63488 00:09:39.359 }, 00:09:39.359 { 00:09:39.359 "name": "BaseBdev3", 00:09:39.359 "uuid": "949ae784-3cf9-572b-95c4-6c768ccfe285", 00:09:39.359 "is_configured": true, 00:09:39.359 "data_offset": 2048, 00:09:39.359 "data_size": 63488 00:09:39.359 } 00:09:39.359 ] 00:09:39.359 }' 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.359 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.931 [2024-12-08 20:05:11.609048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.931 [2024-12-08 20:05:11.609144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.931 [2024-12-08 20:05:11.611828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.931 [2024-12-08 20:05:11.611936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.931 [2024-12-08 20:05:11.612106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.931 [2024-12-08 20:05:11.612160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:39.931 { 00:09:39.931 "results": [ 00:09:39.931 { 00:09:39.931 "job": "raid_bdev1", 
00:09:39.931 "core_mask": "0x1", 00:09:39.931 "workload": "randrw", 00:09:39.931 "percentage": 50, 00:09:39.931 "status": "finished", 00:09:39.931 "queue_depth": 1, 00:09:39.931 "io_size": 131072, 00:09:39.931 "runtime": 1.390231, 00:09:39.931 "iops": 13033.805173384855, 00:09:39.931 "mibps": 1629.225646673107, 00:09:39.931 "io_failed": 0, 00:09:39.931 "io_timeout": 0, 00:09:39.931 "avg_latency_us": 74.03311065482904, 00:09:39.931 "min_latency_us": 24.034934497816593, 00:09:39.931 "max_latency_us": 1452.380786026201 00:09:39.931 } 00:09:39.931 ], 00:09:39.931 "core_count": 1 00:09:39.931 } 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68902 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68902 ']' 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68902 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68902 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68902' 00:09:39.931 killing process with pid 68902 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68902 00:09:39.931 20:05:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68902 00:09:39.931 [2024-12-08 20:05:11.659335] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.931 [2024-12-08 20:05:11.893590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.54RxtJQs8X 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.315 ************************************ 00:09:41.315 END TEST raid_read_error_test 00:09:41.315 ************************************ 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:41.315 00:09:41.315 real 0m4.587s 00:09:41.315 user 0m5.477s 00:09:41.315 sys 0m0.550s 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.315 20:05:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 20:05:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:41.315 20:05:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.315 20:05:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.315 20:05:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 ************************************ 00:09:41.315 START TEST raid_write_error_test 00:09:41.315 ************************************ 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:41.315 20:05:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FTsduiMLaw 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69053 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69053 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69053 ']' 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.315 20:05:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 [2024-12-08 20:05:13.264697] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:41.315 [2024-12-08 20:05:13.264917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69053 ] 00:09:41.576 [2024-12-08 20:05:13.483009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.841 [2024-12-08 20:05:13.602437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.841 [2024-12-08 20:05:13.810112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.841 [2024-12-08 20:05:13.810177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 BaseBdev1_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 true 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 [2024-12-08 20:05:14.158177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.417 [2024-12-08 20:05:14.158235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.417 [2024-12-08 20:05:14.158255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.417 [2024-12-08 20:05:14.158265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.417 [2024-12-08 20:05:14.160560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.417 [2024-12-08 20:05:14.160645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.417 BaseBdev1 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.417 BaseBdev2_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 true 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 [2024-12-08 20:05:14.214663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.417 [2024-12-08 20:05:14.214759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.417 [2024-12-08 20:05:14.214779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.417 [2024-12-08 20:05:14.214790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.417 [2024-12-08 20:05:14.216945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.417 [2024-12-08 20:05:14.216994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.417 BaseBdev2 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.417 20:05:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 BaseBdev3_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 true 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 [2024-12-08 20:05:14.295540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.417 [2024-12-08 20:05:14.295637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.417 [2024-12-08 20:05:14.295676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.417 [2024-12-08 20:05:14.295687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.417 [2024-12-08 20:05:14.297828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.417 [2024-12-08 20:05:14.297871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:42.417 BaseBdev3 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.417 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 [2024-12-08 20:05:14.307607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.417 [2024-12-08 20:05:14.309588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.417 [2024-12-08 20:05:14.309666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.417 [2024-12-08 20:05:14.309876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.417 [2024-12-08 20:05:14.309889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.417 [2024-12-08 20:05:14.310167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:42.417 [2024-12-08 20:05:14.310358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.418 [2024-12-08 20:05:14.310377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:42.418 [2024-12-08 20:05:14.310598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.418 "name": "raid_bdev1", 00:09:42.418 "uuid": "aead36e1-f857-49c7-bc8a-c318880ae834", 00:09:42.418 "strip_size_kb": 0, 00:09:42.418 "state": "online", 00:09:42.418 "raid_level": "raid1", 00:09:42.418 "superblock": true, 00:09:42.418 "num_base_bdevs": 3, 00:09:42.418 "num_base_bdevs_discovered": 3, 00:09:42.418 "num_base_bdevs_operational": 3, 00:09:42.418 "base_bdevs_list": [ 00:09:42.418 { 00:09:42.418 "name": "BaseBdev1", 00:09:42.418 
"uuid": "de341c33-d480-53d2-828d-311a299517ad", 00:09:42.418 "is_configured": true, 00:09:42.418 "data_offset": 2048, 00:09:42.418 "data_size": 63488 00:09:42.418 }, 00:09:42.418 { 00:09:42.418 "name": "BaseBdev2", 00:09:42.418 "uuid": "ff99af7a-d32b-57c5-933f-6d08e1ad97c5", 00:09:42.418 "is_configured": true, 00:09:42.418 "data_offset": 2048, 00:09:42.418 "data_size": 63488 00:09:42.418 }, 00:09:42.418 { 00:09:42.418 "name": "BaseBdev3", 00:09:42.418 "uuid": "6c53fb88-7cc4-57fd-b12f-885d62e09e59", 00:09:42.418 "is_configured": true, 00:09:42.418 "data_offset": 2048, 00:09:42.418 "data_size": 63488 00:09:42.418 } 00:09:42.418 ] 00:09:42.418 }' 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.418 20:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.988 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:42.988 20:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:42.988 [2024-12-08 20:05:14.856239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.926 [2024-12-08 20:05:15.767274] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:43.926 [2024-12-08 20:05:15.767417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.926 [2024-12-08 20:05:15.767673] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.926 "name": "raid_bdev1", 00:09:43.926 "uuid": "aead36e1-f857-49c7-bc8a-c318880ae834", 00:09:43.926 "strip_size_kb": 0, 00:09:43.926 "state": "online", 00:09:43.926 "raid_level": "raid1", 00:09:43.926 "superblock": true, 00:09:43.926 "num_base_bdevs": 3, 00:09:43.926 "num_base_bdevs_discovered": 2, 00:09:43.926 "num_base_bdevs_operational": 2, 00:09:43.926 "base_bdevs_list": [ 00:09:43.926 { 00:09:43.926 "name": null, 00:09:43.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.926 "is_configured": false, 00:09:43.926 "data_offset": 0, 00:09:43.926 "data_size": 63488 00:09:43.926 }, 00:09:43.926 { 00:09:43.926 "name": "BaseBdev2", 00:09:43.926 "uuid": "ff99af7a-d32b-57c5-933f-6d08e1ad97c5", 00:09:43.926 "is_configured": true, 00:09:43.926 "data_offset": 2048, 00:09:43.926 "data_size": 63488 00:09:43.926 }, 00:09:43.926 { 00:09:43.926 "name": "BaseBdev3", 00:09:43.926 "uuid": "6c53fb88-7cc4-57fd-b12f-885d62e09e59", 00:09:43.926 "is_configured": true, 00:09:43.926 "data_offset": 2048, 00:09:43.926 "data_size": 63488 00:09:43.926 } 00:09:43.926 ] 00:09:43.926 }' 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.926 20:05:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.494 [2024-12-08 20:05:16.226202] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.494 [2024-12-08 20:05:16.226240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.494 [2024-12-08 20:05:16.229420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.494 [2024-12-08 20:05:16.229541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.494 [2024-12-08 20:05:16.229679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.494 [2024-12-08 20:05:16.229743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:44.494 { 00:09:44.494 "results": [ 00:09:44.494 { 00:09:44.494 "job": "raid_bdev1", 00:09:44.494 "core_mask": "0x1", 00:09:44.494 "workload": "randrw", 00:09:44.494 "percentage": 50, 00:09:44.494 "status": "finished", 00:09:44.494 "queue_depth": 1, 00:09:44.494 "io_size": 131072, 00:09:44.494 "runtime": 1.370736, 00:09:44.494 "iops": 14163.194079676903, 00:09:44.494 "mibps": 1770.399259959613, 00:09:44.494 "io_failed": 0, 00:09:44.494 "io_timeout": 0, 00:09:44.494 "avg_latency_us": 67.81430732695038, 00:09:44.494 "min_latency_us": 24.482096069868994, 00:09:44.494 "max_latency_us": 1609.7816593886462 00:09:44.494 } 00:09:44.494 ], 00:09:44.494 "core_count": 1 00:09:44.494 } 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69053 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69053 ']' 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69053 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:44.494 20:05:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69053 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69053' 00:09:44.494 killing process with pid 69053 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69053 00:09:44.494 [2024-12-08 20:05:16.272348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.494 20:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69053 00:09:44.754 [2024-12-08 20:05:16.501890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FTsduiMLaw 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:46.135 00:09:46.135 real 0m4.545s 00:09:46.135 user 0m5.394s 00:09:46.135 sys 0m0.579s 00:09:46.135 20:05:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.135 20:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.135 ************************************ 00:09:46.135 END TEST raid_write_error_test 00:09:46.135 ************************************ 00:09:46.135 20:05:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:46.135 20:05:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:46.135 20:05:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:46.135 20:05:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.135 20:05:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.135 20:05:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.135 ************************************ 00:09:46.135 START TEST raid_state_function_test 00:09:46.135 ************************************ 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:46.135 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:46.136 
20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:46.136 Process raid pid: 69191 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69191 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69191' 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69191 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69191 ']' 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.136 20:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.136 [2024-12-08 20:05:17.875200] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:46.136 [2024-12-08 20:05:17.875410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.136 [2024-12-08 20:05:18.048321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.395 [2024-12-08 20:05:18.165336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.395 [2024-12-08 20:05:18.371311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.395 [2024-12-08 20:05:18.371397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.965 [2024-12-08 20:05:18.740743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.965 [2024-12-08 20:05:18.740866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.965 [2024-12-08 20:05:18.740900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.965 [2024-12-08 20:05:18.740925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.965 [2024-12-08 20:05:18.740944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:46.965 [2024-12-08 20:05:18.740975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.965 [2024-12-08 20:05:18.741023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.965 [2024-12-08 20:05:18.741048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.965 "name": "Existed_Raid", 00:09:46.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.965 "strip_size_kb": 64, 00:09:46.965 "state": "configuring", 00:09:46.965 "raid_level": "raid0", 00:09:46.965 "superblock": false, 00:09:46.965 "num_base_bdevs": 4, 00:09:46.965 "num_base_bdevs_discovered": 0, 00:09:46.965 "num_base_bdevs_operational": 4, 00:09:46.965 "base_bdevs_list": [ 00:09:46.965 { 00:09:46.965 "name": "BaseBdev1", 00:09:46.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.965 "is_configured": false, 00:09:46.965 "data_offset": 0, 00:09:46.965 "data_size": 0 00:09:46.965 }, 00:09:46.965 { 00:09:46.965 "name": "BaseBdev2", 00:09:46.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.965 "is_configured": false, 00:09:46.965 "data_offset": 0, 00:09:46.965 "data_size": 0 00:09:46.965 }, 00:09:46.965 { 00:09:46.965 "name": "BaseBdev3", 00:09:46.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.965 "is_configured": false, 00:09:46.965 "data_offset": 0, 00:09:46.965 "data_size": 0 00:09:46.965 }, 00:09:46.965 { 00:09:46.965 "name": "BaseBdev4", 00:09:46.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.965 "is_configured": false, 00:09:46.965 "data_offset": 0, 00:09:46.965 "data_size": 0 00:09:46.965 } 00:09:46.965 ] 00:09:46.965 }' 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.965 20:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.225 [2024-12-08 20:05:19.187956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.225 [2024-12-08 20:05:19.188065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.225 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.225 [2024-12-08 20:05:19.199932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.225 [2024-12-08 20:05:19.200041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.225 [2024-12-08 20:05:19.200058] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.225 [2024-12-08 20:05:19.200069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.225 [2024-12-08 20:05:19.200076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.225 [2024-12-08 20:05:19.200086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.225 [2024-12-08 20:05:19.200093] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.225 [2024-12-08 20:05:19.200117] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.485 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.486 [2024-12-08 20:05:19.246661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.486 BaseBdev1 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.486 [ 00:09:47.486 { 00:09:47.486 "name": "BaseBdev1", 00:09:47.486 "aliases": [ 00:09:47.486 "c3464f3a-e02f-4346-a4c6-0e1af15bd75f" 00:09:47.486 ], 00:09:47.486 "product_name": "Malloc disk", 00:09:47.486 "block_size": 512, 00:09:47.486 "num_blocks": 65536, 00:09:47.486 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:47.486 "assigned_rate_limits": { 00:09:47.486 "rw_ios_per_sec": 0, 00:09:47.486 "rw_mbytes_per_sec": 0, 00:09:47.486 "r_mbytes_per_sec": 0, 00:09:47.486 "w_mbytes_per_sec": 0 00:09:47.486 }, 00:09:47.486 "claimed": true, 00:09:47.486 "claim_type": "exclusive_write", 00:09:47.486 "zoned": false, 00:09:47.486 "supported_io_types": { 00:09:47.486 "read": true, 00:09:47.486 "write": true, 00:09:47.486 "unmap": true, 00:09:47.486 "flush": true, 00:09:47.486 "reset": true, 00:09:47.486 "nvme_admin": false, 00:09:47.486 "nvme_io": false, 00:09:47.486 "nvme_io_md": false, 00:09:47.486 "write_zeroes": true, 00:09:47.486 "zcopy": true, 00:09:47.486 "get_zone_info": false, 00:09:47.486 "zone_management": false, 00:09:47.486 "zone_append": false, 00:09:47.486 "compare": false, 00:09:47.486 "compare_and_write": false, 00:09:47.486 "abort": true, 00:09:47.486 "seek_hole": false, 00:09:47.486 "seek_data": false, 00:09:47.486 "copy": true, 00:09:47.486 "nvme_iov_md": false 00:09:47.486 }, 00:09:47.486 "memory_domains": [ 00:09:47.486 { 00:09:47.486 "dma_device_id": "system", 00:09:47.486 "dma_device_type": 1 00:09:47.486 }, 00:09:47.486 { 00:09:47.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.486 "dma_device_type": 2 00:09:47.486 } 00:09:47.486 ], 00:09:47.486 "driver_specific": {} 00:09:47.486 } 00:09:47.486 ] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.486 "name": "Existed_Raid", 
00:09:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.486 "strip_size_kb": 64, 00:09:47.486 "state": "configuring", 00:09:47.486 "raid_level": "raid0", 00:09:47.486 "superblock": false, 00:09:47.486 "num_base_bdevs": 4, 00:09:47.486 "num_base_bdevs_discovered": 1, 00:09:47.486 "num_base_bdevs_operational": 4, 00:09:47.486 "base_bdevs_list": [ 00:09:47.486 { 00:09:47.486 "name": "BaseBdev1", 00:09:47.486 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:47.486 "is_configured": true, 00:09:47.486 "data_offset": 0, 00:09:47.486 "data_size": 65536 00:09:47.486 }, 00:09:47.486 { 00:09:47.486 "name": "BaseBdev2", 00:09:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.486 "is_configured": false, 00:09:47.486 "data_offset": 0, 00:09:47.486 "data_size": 0 00:09:47.486 }, 00:09:47.486 { 00:09:47.486 "name": "BaseBdev3", 00:09:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.486 "is_configured": false, 00:09:47.486 "data_offset": 0, 00:09:47.486 "data_size": 0 00:09:47.486 }, 00:09:47.486 { 00:09:47.486 "name": "BaseBdev4", 00:09:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.486 "is_configured": false, 00:09:47.486 "data_offset": 0, 00:09:47.486 "data_size": 0 00:09:47.486 } 00:09:47.486 ] 00:09:47.486 }' 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.486 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.057 [2024-12-08 20:05:19.761844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.057 [2024-12-08 20:05:19.761985] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.057 [2024-12-08 20:05:19.773869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.057 [2024-12-08 20:05:19.775814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.057 [2024-12-08 20:05:19.775898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.057 [2024-12-08 20:05:19.775929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.057 [2024-12-08 20:05:19.775963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.057 [2024-12-08 20:05:19.775984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:48.057 [2024-12-08 20:05:19.776021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.057 "name": "Existed_Raid", 00:09:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.057 "strip_size_kb": 64, 00:09:48.057 "state": "configuring", 00:09:48.057 "raid_level": "raid0", 00:09:48.057 "superblock": false, 00:09:48.057 "num_base_bdevs": 4, 00:09:48.057 
"num_base_bdevs_discovered": 1, 00:09:48.057 "num_base_bdevs_operational": 4, 00:09:48.057 "base_bdevs_list": [ 00:09:48.057 { 00:09:48.057 "name": "BaseBdev1", 00:09:48.057 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:48.057 "is_configured": true, 00:09:48.057 "data_offset": 0, 00:09:48.057 "data_size": 65536 00:09:48.057 }, 00:09:48.057 { 00:09:48.057 "name": "BaseBdev2", 00:09:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.057 "is_configured": false, 00:09:48.057 "data_offset": 0, 00:09:48.057 "data_size": 0 00:09:48.057 }, 00:09:48.057 { 00:09:48.057 "name": "BaseBdev3", 00:09:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.057 "is_configured": false, 00:09:48.057 "data_offset": 0, 00:09:48.057 "data_size": 0 00:09:48.057 }, 00:09:48.057 { 00:09:48.057 "name": "BaseBdev4", 00:09:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.057 "is_configured": false, 00:09:48.057 "data_offset": 0, 00:09:48.057 "data_size": 0 00:09:48.057 } 00:09:48.057 ] 00:09:48.057 }' 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.057 20:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.317 [2024-12-08 20:05:20.259489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.317 BaseBdev2 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.317 20:05:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.317 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.317 [ 00:09:48.317 { 00:09:48.317 "name": "BaseBdev2", 00:09:48.317 "aliases": [ 00:09:48.317 "5719cd11-490e-497e-96b6-fe4eca205ae3" 00:09:48.317 ], 00:09:48.317 "product_name": "Malloc disk", 00:09:48.317 "block_size": 512, 00:09:48.317 "num_blocks": 65536, 00:09:48.317 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:48.317 "assigned_rate_limits": { 00:09:48.317 "rw_ios_per_sec": 0, 00:09:48.317 "rw_mbytes_per_sec": 0, 00:09:48.317 "r_mbytes_per_sec": 0, 00:09:48.317 "w_mbytes_per_sec": 0 00:09:48.317 }, 00:09:48.317 "claimed": true, 00:09:48.317 "claim_type": "exclusive_write", 00:09:48.317 "zoned": false, 00:09:48.317 "supported_io_types": { 
00:09:48.317 "read": true, 00:09:48.317 "write": true, 00:09:48.317 "unmap": true, 00:09:48.317 "flush": true, 00:09:48.317 "reset": true, 00:09:48.317 "nvme_admin": false, 00:09:48.317 "nvme_io": false, 00:09:48.317 "nvme_io_md": false, 00:09:48.317 "write_zeroes": true, 00:09:48.317 "zcopy": true, 00:09:48.317 "get_zone_info": false, 00:09:48.317 "zone_management": false, 00:09:48.317 "zone_append": false, 00:09:48.317 "compare": false, 00:09:48.317 "compare_and_write": false, 00:09:48.577 "abort": true, 00:09:48.577 "seek_hole": false, 00:09:48.577 "seek_data": false, 00:09:48.577 "copy": true, 00:09:48.577 "nvme_iov_md": false 00:09:48.577 }, 00:09:48.577 "memory_domains": [ 00:09:48.577 { 00:09:48.577 "dma_device_id": "system", 00:09:48.577 "dma_device_type": 1 00:09:48.577 }, 00:09:48.577 { 00:09:48.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.577 "dma_device_type": 2 00:09:48.577 } 00:09:48.577 ], 00:09:48.577 "driver_specific": {} 00:09:48.577 } 00:09:48.577 ] 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.577 "name": "Existed_Raid", 00:09:48.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.577 "strip_size_kb": 64, 00:09:48.577 "state": "configuring", 00:09:48.577 "raid_level": "raid0", 00:09:48.577 "superblock": false, 00:09:48.577 "num_base_bdevs": 4, 00:09:48.577 "num_base_bdevs_discovered": 2, 00:09:48.577 "num_base_bdevs_operational": 4, 00:09:48.577 "base_bdevs_list": [ 00:09:48.577 { 00:09:48.577 "name": "BaseBdev1", 00:09:48.577 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:48.577 "is_configured": true, 00:09:48.577 "data_offset": 0, 00:09:48.577 "data_size": 65536 00:09:48.577 }, 00:09:48.577 { 00:09:48.577 "name": "BaseBdev2", 00:09:48.577 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:48.577 
"is_configured": true, 00:09:48.577 "data_offset": 0, 00:09:48.577 "data_size": 65536 00:09:48.577 }, 00:09:48.577 { 00:09:48.577 "name": "BaseBdev3", 00:09:48.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.577 "is_configured": false, 00:09:48.577 "data_offset": 0, 00:09:48.577 "data_size": 0 00:09:48.577 }, 00:09:48.577 { 00:09:48.577 "name": "BaseBdev4", 00:09:48.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.577 "is_configured": false, 00:09:48.577 "data_offset": 0, 00:09:48.577 "data_size": 0 00:09:48.577 } 00:09:48.577 ] 00:09:48.577 }' 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.577 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 [2024-12-08 20:05:20.774936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.837 BaseBdev3 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.837 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 [ 00:09:48.837 { 00:09:48.837 "name": "BaseBdev3", 00:09:48.837 "aliases": [ 00:09:48.837 "f784e031-a60e-4710-bbc3-175ac98f535c" 00:09:48.837 ], 00:09:48.837 "product_name": "Malloc disk", 00:09:48.837 "block_size": 512, 00:09:48.837 "num_blocks": 65536, 00:09:48.837 "uuid": "f784e031-a60e-4710-bbc3-175ac98f535c", 00:09:48.837 "assigned_rate_limits": { 00:09:48.837 "rw_ios_per_sec": 0, 00:09:48.837 "rw_mbytes_per_sec": 0, 00:09:48.837 "r_mbytes_per_sec": 0, 00:09:48.837 "w_mbytes_per_sec": 0 00:09:48.837 }, 00:09:48.837 "claimed": true, 00:09:48.837 "claim_type": "exclusive_write", 00:09:48.837 "zoned": false, 00:09:48.837 "supported_io_types": { 00:09:48.837 "read": true, 00:09:48.837 "write": true, 00:09:48.837 "unmap": true, 00:09:48.837 "flush": true, 00:09:48.837 "reset": true, 00:09:48.837 "nvme_admin": false, 00:09:48.837 "nvme_io": false, 00:09:48.837 "nvme_io_md": false, 00:09:48.837 "write_zeroes": true, 00:09:48.837 "zcopy": true, 00:09:48.837 "get_zone_info": false, 00:09:48.837 "zone_management": false, 00:09:48.837 "zone_append": false, 00:09:48.837 "compare": false, 00:09:48.837 "compare_and_write": false, 
00:09:48.837 "abort": true, 00:09:48.837 "seek_hole": false, 00:09:48.837 "seek_data": false, 00:09:48.837 "copy": true, 00:09:48.837 "nvme_iov_md": false 00:09:48.837 }, 00:09:48.837 "memory_domains": [ 00:09:49.097 { 00:09:49.097 "dma_device_id": "system", 00:09:49.097 "dma_device_type": 1 00:09:49.097 }, 00:09:49.097 { 00:09:49.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.097 "dma_device_type": 2 00:09:49.097 } 00:09:49.097 ], 00:09:49.097 "driver_specific": {} 00:09:49.097 } 00:09:49.097 ] 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.097 "name": "Existed_Raid", 00:09:49.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.097 "strip_size_kb": 64, 00:09:49.097 "state": "configuring", 00:09:49.097 "raid_level": "raid0", 00:09:49.097 "superblock": false, 00:09:49.097 "num_base_bdevs": 4, 00:09:49.097 "num_base_bdevs_discovered": 3, 00:09:49.097 "num_base_bdevs_operational": 4, 00:09:49.097 "base_bdevs_list": [ 00:09:49.097 { 00:09:49.097 "name": "BaseBdev1", 00:09:49.097 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:49.097 "is_configured": true, 00:09:49.097 "data_offset": 0, 00:09:49.097 "data_size": 65536 00:09:49.097 }, 00:09:49.097 { 00:09:49.097 "name": "BaseBdev2", 00:09:49.097 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:49.097 "is_configured": true, 00:09:49.097 "data_offset": 0, 00:09:49.097 "data_size": 65536 00:09:49.097 }, 00:09:49.097 { 00:09:49.097 "name": "BaseBdev3", 00:09:49.097 "uuid": "f784e031-a60e-4710-bbc3-175ac98f535c", 00:09:49.097 "is_configured": true, 00:09:49.097 "data_offset": 0, 00:09:49.097 "data_size": 65536 00:09:49.097 }, 00:09:49.097 { 00:09:49.097 "name": "BaseBdev4", 00:09:49.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.097 "is_configured": false, 
00:09:49.097 "data_offset": 0, 00:09:49.097 "data_size": 0 00:09:49.097 } 00:09:49.097 ] 00:09:49.097 }' 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.097 20:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.357 [2024-12-08 20:05:21.320350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.357 [2024-12-08 20:05:21.320396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.357 [2024-12-08 20:05:21.320405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:49.357 [2024-12-08 20:05:21.320664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.357 [2024-12-08 20:05:21.320820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.357 [2024-12-08 20:05:21.320832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.357 [2024-12-08 20:05:21.321138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.357 BaseBdev4 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.357 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.616 [ 00:09:49.616 { 00:09:49.616 "name": "BaseBdev4", 00:09:49.616 "aliases": [ 00:09:49.616 "f8c8d6d8-cec2-470e-b661-54a415583153" 00:09:49.616 ], 00:09:49.616 "product_name": "Malloc disk", 00:09:49.616 "block_size": 512, 00:09:49.616 "num_blocks": 65536, 00:09:49.616 "uuid": "f8c8d6d8-cec2-470e-b661-54a415583153", 00:09:49.616 "assigned_rate_limits": { 00:09:49.616 "rw_ios_per_sec": 0, 00:09:49.616 "rw_mbytes_per_sec": 0, 00:09:49.616 "r_mbytes_per_sec": 0, 00:09:49.616 "w_mbytes_per_sec": 0 00:09:49.616 }, 00:09:49.616 "claimed": true, 00:09:49.616 "claim_type": "exclusive_write", 00:09:49.616 "zoned": false, 00:09:49.616 "supported_io_types": { 00:09:49.616 "read": true, 00:09:49.616 "write": true, 00:09:49.616 "unmap": true, 00:09:49.616 "flush": true, 00:09:49.616 "reset": true, 00:09:49.616 
"nvme_admin": false, 00:09:49.616 "nvme_io": false, 00:09:49.616 "nvme_io_md": false, 00:09:49.616 "write_zeroes": true, 00:09:49.616 "zcopy": true, 00:09:49.616 "get_zone_info": false, 00:09:49.616 "zone_management": false, 00:09:49.616 "zone_append": false, 00:09:49.616 "compare": false, 00:09:49.616 "compare_and_write": false, 00:09:49.616 "abort": true, 00:09:49.616 "seek_hole": false, 00:09:49.616 "seek_data": false, 00:09:49.616 "copy": true, 00:09:49.616 "nvme_iov_md": false 00:09:49.616 }, 00:09:49.616 "memory_domains": [ 00:09:49.616 { 00:09:49.616 "dma_device_id": "system", 00:09:49.616 "dma_device_type": 1 00:09:49.616 }, 00:09:49.616 { 00:09:49.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.616 "dma_device_type": 2 00:09:49.616 } 00:09:49.616 ], 00:09:49.616 "driver_specific": {} 00:09:49.616 } 00:09:49.616 ] 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.616 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.617 20:05:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.617 "name": "Existed_Raid", 00:09:49.617 "uuid": "d21489f5-69db-4ab0-85ec-a499eace1deb", 00:09:49.617 "strip_size_kb": 64, 00:09:49.617 "state": "online", 00:09:49.617 "raid_level": "raid0", 00:09:49.617 "superblock": false, 00:09:49.617 "num_base_bdevs": 4, 00:09:49.617 "num_base_bdevs_discovered": 4, 00:09:49.617 "num_base_bdevs_operational": 4, 00:09:49.617 "base_bdevs_list": [ 00:09:49.617 { 00:09:49.617 "name": "BaseBdev1", 00:09:49.617 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:49.617 "is_configured": true, 00:09:49.617 "data_offset": 0, 00:09:49.617 "data_size": 65536 00:09:49.617 }, 00:09:49.617 { 00:09:49.617 "name": "BaseBdev2", 00:09:49.617 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:49.617 "is_configured": true, 00:09:49.617 "data_offset": 0, 00:09:49.617 "data_size": 65536 00:09:49.617 }, 00:09:49.617 { 00:09:49.617 "name": "BaseBdev3", 00:09:49.617 "uuid": 
"f784e031-a60e-4710-bbc3-175ac98f535c", 00:09:49.617 "is_configured": true, 00:09:49.617 "data_offset": 0, 00:09:49.617 "data_size": 65536 00:09:49.617 }, 00:09:49.617 { 00:09:49.617 "name": "BaseBdev4", 00:09:49.617 "uuid": "f8c8d6d8-cec2-470e-b661-54a415583153", 00:09:49.617 "is_configured": true, 00:09:49.617 "data_offset": 0, 00:09:49.617 "data_size": 65536 00:09:49.617 } 00:09:49.617 ] 00:09:49.617 }' 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.617 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 [2024-12-08 20:05:21.799963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.876 20:05:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.876 "name": "Existed_Raid", 00:09:49.876 "aliases": [ 00:09:49.876 "d21489f5-69db-4ab0-85ec-a499eace1deb" 00:09:49.876 ], 00:09:49.876 "product_name": "Raid Volume", 00:09:49.876 "block_size": 512, 00:09:49.876 "num_blocks": 262144, 00:09:49.876 "uuid": "d21489f5-69db-4ab0-85ec-a499eace1deb", 00:09:49.876 "assigned_rate_limits": { 00:09:49.876 "rw_ios_per_sec": 0, 00:09:49.876 "rw_mbytes_per_sec": 0, 00:09:49.876 "r_mbytes_per_sec": 0, 00:09:49.876 "w_mbytes_per_sec": 0 00:09:49.876 }, 00:09:49.876 "claimed": false, 00:09:49.876 "zoned": false, 00:09:49.876 "supported_io_types": { 00:09:49.876 "read": true, 00:09:49.876 "write": true, 00:09:49.876 "unmap": true, 00:09:49.876 "flush": true, 00:09:49.876 "reset": true, 00:09:49.876 "nvme_admin": false, 00:09:49.876 "nvme_io": false, 00:09:49.876 "nvme_io_md": false, 00:09:49.876 "write_zeroes": true, 00:09:49.876 "zcopy": false, 00:09:49.876 "get_zone_info": false, 00:09:49.876 "zone_management": false, 00:09:49.876 "zone_append": false, 00:09:49.876 "compare": false, 00:09:49.876 "compare_and_write": false, 00:09:49.876 "abort": false, 00:09:49.876 "seek_hole": false, 00:09:49.876 "seek_data": false, 00:09:49.876 "copy": false, 00:09:49.876 "nvme_iov_md": false 00:09:49.876 }, 00:09:49.876 "memory_domains": [ 00:09:49.876 { 00:09:49.876 "dma_device_id": "system", 00:09:49.876 "dma_device_type": 1 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.876 "dma_device_type": 2 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "system", 00:09:49.876 "dma_device_type": 1 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.876 "dma_device_type": 2 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "system", 00:09:49.876 "dma_device_type": 1 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:49.876 "dma_device_type": 2 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "system", 00:09:49.876 "dma_device_type": 1 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.876 "dma_device_type": 2 00:09:49.876 } 00:09:49.876 ], 00:09:49.876 "driver_specific": { 00:09:49.876 "raid": { 00:09:49.876 "uuid": "d21489f5-69db-4ab0-85ec-a499eace1deb", 00:09:49.876 "strip_size_kb": 64, 00:09:49.876 "state": "online", 00:09:49.876 "raid_level": "raid0", 00:09:49.876 "superblock": false, 00:09:49.876 "num_base_bdevs": 4, 00:09:49.876 "num_base_bdevs_discovered": 4, 00:09:49.876 "num_base_bdevs_operational": 4, 00:09:49.876 "base_bdevs_list": [ 00:09:49.876 { 00:09:49.876 "name": "BaseBdev1", 00:09:49.876 "uuid": "c3464f3a-e02f-4346-a4c6-0e1af15bd75f", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "name": "BaseBdev2", 00:09:49.876 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "name": "BaseBdev3", 00:09:49.876 "uuid": "f784e031-a60e-4710-bbc3-175ac98f535c", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 }, 00:09:49.876 { 00:09:49.876 "name": "BaseBdev4", 00:09:49.876 "uuid": "f8c8d6d8-cec2-470e-b661-54a415583153", 00:09:49.876 "is_configured": true, 00:09:49.876 "data_offset": 0, 00:09:49.876 "data_size": 65536 00:09:49.876 } 00:09:49.876 ] 00:09:49.876 } 00:09:49.876 } 00:09:49.876 }' 00:09:49.876 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.135 BaseBdev2 00:09:50.135 BaseBdev3 
00:09:50.135 BaseBdev4' 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.135 20:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.136 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.136 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.136 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.136 20:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.136 20:05:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.136 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.395 20:05:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.395 [2024-12-08 20:05:22.135266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.395 [2024-12-08 20:05:22.135345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.395 [2024-12-08 20:05:22.135405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.395 "name": "Existed_Raid", 00:09:50.395 "uuid": "d21489f5-69db-4ab0-85ec-a499eace1deb", 00:09:50.395 "strip_size_kb": 64, 00:09:50.395 "state": "offline", 00:09:50.395 "raid_level": "raid0", 00:09:50.395 "superblock": false, 00:09:50.395 "num_base_bdevs": 4, 00:09:50.395 "num_base_bdevs_discovered": 3, 00:09:50.395 "num_base_bdevs_operational": 3, 00:09:50.395 "base_bdevs_list": [ 00:09:50.395 { 00:09:50.395 "name": null, 00:09:50.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.395 "is_configured": false, 00:09:50.395 "data_offset": 0, 00:09:50.395 "data_size": 65536 00:09:50.395 }, 00:09:50.395 { 00:09:50.395 "name": "BaseBdev2", 00:09:50.395 "uuid": "5719cd11-490e-497e-96b6-fe4eca205ae3", 00:09:50.395 "is_configured": 
true, 00:09:50.395 "data_offset": 0, 00:09:50.395 "data_size": 65536 00:09:50.395 }, 00:09:50.395 { 00:09:50.395 "name": "BaseBdev3", 00:09:50.395 "uuid": "f784e031-a60e-4710-bbc3-175ac98f535c", 00:09:50.395 "is_configured": true, 00:09:50.395 "data_offset": 0, 00:09:50.395 "data_size": 65536 00:09:50.395 }, 00:09:50.395 { 00:09:50.395 "name": "BaseBdev4", 00:09:50.395 "uuid": "f8c8d6d8-cec2-470e-b661-54a415583153", 00:09:50.395 "is_configured": true, 00:09:50.395 "data_offset": 0, 00:09:50.395 "data_size": 65536 00:09:50.395 } 00:09:50.395 ] 00:09:50.395 }' 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.395 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.963 [2024-12-08 20:05:22.728279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.963 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.963 [2024-12-08 20:05:22.884728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.223 20:05:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.223 20:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.223 [2024-12-08 20:05:23.040006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:51.223 [2024-12-08 20:05:23.040056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.223 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 BaseBdev2 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 [ 00:09:51.483 { 00:09:51.483 "name": "BaseBdev2", 00:09:51.483 "aliases": [ 00:09:51.483 "7e2f93a8-441d-4591-8128-8b84fd6da572" 00:09:51.483 ], 00:09:51.483 "product_name": "Malloc disk", 00:09:51.483 "block_size": 512, 00:09:51.483 "num_blocks": 65536, 00:09:51.483 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:51.483 "assigned_rate_limits": { 00:09:51.483 "rw_ios_per_sec": 0, 00:09:51.483 "rw_mbytes_per_sec": 0, 00:09:51.483 "r_mbytes_per_sec": 0, 00:09:51.483 "w_mbytes_per_sec": 0 00:09:51.483 }, 00:09:51.483 "claimed": false, 00:09:51.483 "zoned": false, 00:09:51.483 "supported_io_types": { 00:09:51.483 "read": true, 00:09:51.483 "write": true, 00:09:51.483 "unmap": true, 00:09:51.483 "flush": true, 00:09:51.483 "reset": true, 00:09:51.483 "nvme_admin": false, 00:09:51.483 "nvme_io": false, 00:09:51.483 "nvme_io_md": false, 00:09:51.483 "write_zeroes": true, 00:09:51.483 "zcopy": true, 00:09:51.483 "get_zone_info": false, 00:09:51.483 "zone_management": false, 00:09:51.483 "zone_append": false, 00:09:51.483 "compare": false, 00:09:51.483 "compare_and_write": false, 00:09:51.483 "abort": true, 00:09:51.483 "seek_hole": false, 00:09:51.483 
"seek_data": false, 00:09:51.483 "copy": true, 00:09:51.483 "nvme_iov_md": false 00:09:51.483 }, 00:09:51.483 "memory_domains": [ 00:09:51.483 { 00:09:51.483 "dma_device_id": "system", 00:09:51.483 "dma_device_type": 1 00:09:51.483 }, 00:09:51.483 { 00:09:51.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.483 "dma_device_type": 2 00:09:51.483 } 00:09:51.483 ], 00:09:51.483 "driver_specific": {} 00:09:51.483 } 00:09:51.483 ] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 BaseBdev3 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 [ 00:09:51.483 { 00:09:51.483 "name": "BaseBdev3", 00:09:51.483 "aliases": [ 00:09:51.483 "9a14fcf7-7aff-4f9f-9984-1467718bf8ca" 00:09:51.483 ], 00:09:51.483 "product_name": "Malloc disk", 00:09:51.483 "block_size": 512, 00:09:51.483 "num_blocks": 65536, 00:09:51.483 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:51.483 "assigned_rate_limits": { 00:09:51.483 "rw_ios_per_sec": 0, 00:09:51.483 "rw_mbytes_per_sec": 0, 00:09:51.483 "r_mbytes_per_sec": 0, 00:09:51.483 "w_mbytes_per_sec": 0 00:09:51.483 }, 00:09:51.483 "claimed": false, 00:09:51.483 "zoned": false, 00:09:51.483 "supported_io_types": { 00:09:51.483 "read": true, 00:09:51.483 "write": true, 00:09:51.483 "unmap": true, 00:09:51.483 "flush": true, 00:09:51.483 "reset": true, 00:09:51.483 "nvme_admin": false, 00:09:51.483 "nvme_io": false, 00:09:51.483 "nvme_io_md": false, 00:09:51.483 "write_zeroes": true, 00:09:51.483 "zcopy": true, 00:09:51.483 "get_zone_info": false, 00:09:51.483 "zone_management": false, 00:09:51.483 "zone_append": false, 00:09:51.483 "compare": false, 00:09:51.483 "compare_and_write": false, 00:09:51.483 "abort": true, 00:09:51.483 "seek_hole": false, 00:09:51.483 "seek_data": false, 
00:09:51.483 "copy": true, 00:09:51.483 "nvme_iov_md": false 00:09:51.483 }, 00:09:51.483 "memory_domains": [ 00:09:51.483 { 00:09:51.483 "dma_device_id": "system", 00:09:51.483 "dma_device_type": 1 00:09:51.483 }, 00:09:51.483 { 00:09:51.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.483 "dma_device_type": 2 00:09:51.483 } 00:09:51.483 ], 00:09:51.483 "driver_specific": {} 00:09:51.483 } 00:09:51.483 ] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 BaseBdev4 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.483 
20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.483 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 [ 00:09:51.483 { 00:09:51.483 "name": "BaseBdev4", 00:09:51.483 "aliases": [ 00:09:51.483 "1bbc994c-e683-4a19-975d-8aaadd0d6073" 00:09:51.483 ], 00:09:51.483 "product_name": "Malloc disk", 00:09:51.483 "block_size": 512, 00:09:51.483 "num_blocks": 65536, 00:09:51.483 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:51.483 "assigned_rate_limits": { 00:09:51.483 "rw_ios_per_sec": 0, 00:09:51.483 "rw_mbytes_per_sec": 0, 00:09:51.484 "r_mbytes_per_sec": 0, 00:09:51.484 "w_mbytes_per_sec": 0 00:09:51.484 }, 00:09:51.484 "claimed": false, 00:09:51.484 "zoned": false, 00:09:51.484 "supported_io_types": { 00:09:51.484 "read": true, 00:09:51.484 "write": true, 00:09:51.484 "unmap": true, 00:09:51.484 "flush": true, 00:09:51.484 "reset": true, 00:09:51.484 "nvme_admin": false, 00:09:51.484 "nvme_io": false, 00:09:51.484 "nvme_io_md": false, 00:09:51.484 "write_zeroes": true, 00:09:51.484 "zcopy": true, 00:09:51.484 "get_zone_info": false, 00:09:51.484 "zone_management": false, 00:09:51.484 "zone_append": false, 00:09:51.484 "compare": false, 00:09:51.484 "compare_and_write": false, 00:09:51.484 "abort": true, 00:09:51.484 "seek_hole": false, 00:09:51.484 "seek_data": false, 00:09:51.484 
"copy": true, 00:09:51.484 "nvme_iov_md": false 00:09:51.484 }, 00:09:51.484 "memory_domains": [ 00:09:51.484 { 00:09:51.484 "dma_device_id": "system", 00:09:51.484 "dma_device_type": 1 00:09:51.484 }, 00:09:51.484 { 00:09:51.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.484 "dma_device_type": 2 00:09:51.484 } 00:09:51.484 ], 00:09:51.484 "driver_specific": {} 00:09:51.484 } 00:09:51.484 ] 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 [2024-12-08 20:05:23.435451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.484 [2024-12-08 20:05:23.435553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.484 [2024-12-08 20:05:23.435600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.484 [2024-12-08 20:05:23.437504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.484 [2024-12-08 20:05:23.437625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.484 20:05:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.484 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.743 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.743 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.743 "name": "Existed_Raid", 00:09:51.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.743 "strip_size_kb": 64, 00:09:51.743 "state": "configuring", 00:09:51.743 
"raid_level": "raid0", 00:09:51.743 "superblock": false, 00:09:51.743 "num_base_bdevs": 4, 00:09:51.743 "num_base_bdevs_discovered": 3, 00:09:51.743 "num_base_bdevs_operational": 4, 00:09:51.743 "base_bdevs_list": [ 00:09:51.743 { 00:09:51.743 "name": "BaseBdev1", 00:09:51.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.743 "is_configured": false, 00:09:51.743 "data_offset": 0, 00:09:51.743 "data_size": 0 00:09:51.743 }, 00:09:51.743 { 00:09:51.743 "name": "BaseBdev2", 00:09:51.743 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:51.743 "is_configured": true, 00:09:51.743 "data_offset": 0, 00:09:51.743 "data_size": 65536 00:09:51.743 }, 00:09:51.743 { 00:09:51.743 "name": "BaseBdev3", 00:09:51.743 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:51.743 "is_configured": true, 00:09:51.743 "data_offset": 0, 00:09:51.743 "data_size": 65536 00:09:51.743 }, 00:09:51.743 { 00:09:51.743 "name": "BaseBdev4", 00:09:51.743 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:51.743 "is_configured": true, 00:09:51.743 "data_offset": 0, 00:09:51.743 "data_size": 65536 00:09:51.743 } 00:09:51.743 ] 00:09:51.743 }' 00:09:51.743 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.743 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 [2024-12-08 20:05:23.866765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.003 "name": "Existed_Raid", 00:09:52.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.003 "strip_size_kb": 64, 00:09:52.003 "state": "configuring", 00:09:52.003 "raid_level": "raid0", 00:09:52.003 "superblock": false, 00:09:52.003 
"num_base_bdevs": 4, 00:09:52.003 "num_base_bdevs_discovered": 2, 00:09:52.003 "num_base_bdevs_operational": 4, 00:09:52.003 "base_bdevs_list": [ 00:09:52.003 { 00:09:52.003 "name": "BaseBdev1", 00:09:52.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.003 "is_configured": false, 00:09:52.003 "data_offset": 0, 00:09:52.003 "data_size": 0 00:09:52.003 }, 00:09:52.003 { 00:09:52.003 "name": null, 00:09:52.003 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:52.003 "is_configured": false, 00:09:52.003 "data_offset": 0, 00:09:52.003 "data_size": 65536 00:09:52.003 }, 00:09:52.003 { 00:09:52.003 "name": "BaseBdev3", 00:09:52.003 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:52.003 "is_configured": true, 00:09:52.003 "data_offset": 0, 00:09:52.003 "data_size": 65536 00:09:52.003 }, 00:09:52.003 { 00:09:52.003 "name": "BaseBdev4", 00:09:52.003 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:52.003 "is_configured": true, 00:09:52.003 "data_offset": 0, 00:09:52.003 "data_size": 65536 00:09:52.003 } 00:09:52.003 ] 00:09:52.003 }' 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.003 20:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:52.572 20:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.572 [2024-12-08 20:05:24.308170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.572 BaseBdev1 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.572 [ 00:09:52.572 { 00:09:52.572 "name": "BaseBdev1", 00:09:52.572 "aliases": [ 00:09:52.572 "ff6da2e9-8128-433c-a228-9b34eae85cad" 00:09:52.572 ], 00:09:52.572 "product_name": "Malloc disk", 00:09:52.572 "block_size": 512, 00:09:52.572 "num_blocks": 65536, 00:09:52.572 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:52.572 "assigned_rate_limits": { 00:09:52.572 "rw_ios_per_sec": 0, 00:09:52.572 "rw_mbytes_per_sec": 0, 00:09:52.572 "r_mbytes_per_sec": 0, 00:09:52.572 "w_mbytes_per_sec": 0 00:09:52.572 }, 00:09:52.572 "claimed": true, 00:09:52.572 "claim_type": "exclusive_write", 00:09:52.572 "zoned": false, 00:09:52.572 "supported_io_types": { 00:09:52.572 "read": true, 00:09:52.572 "write": true, 00:09:52.572 "unmap": true, 00:09:52.572 "flush": true, 00:09:52.572 "reset": true, 00:09:52.572 "nvme_admin": false, 00:09:52.572 "nvme_io": false, 00:09:52.572 "nvme_io_md": false, 00:09:52.572 "write_zeroes": true, 00:09:52.572 "zcopy": true, 00:09:52.572 "get_zone_info": false, 00:09:52.572 "zone_management": false, 00:09:52.572 "zone_append": false, 00:09:52.572 "compare": false, 00:09:52.572 "compare_and_write": false, 00:09:52.572 "abort": true, 00:09:52.572 "seek_hole": false, 00:09:52.572 "seek_data": false, 00:09:52.572 "copy": true, 00:09:52.572 "nvme_iov_md": false 00:09:52.572 }, 00:09:52.572 "memory_domains": [ 00:09:52.572 { 00:09:52.572 "dma_device_id": "system", 00:09:52.572 "dma_device_type": 1 00:09:52.572 }, 00:09:52.572 { 00:09:52.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.572 "dma_device_type": 2 00:09:52.572 } 00:09:52.572 ], 00:09:52.572 "driver_specific": {} 00:09:52.572 } 00:09:52.572 ] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.572 "name": "Existed_Raid", 00:09:52.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.572 "strip_size_kb": 64, 00:09:52.572 "state": "configuring", 00:09:52.572 "raid_level": "raid0", 00:09:52.572 "superblock": false, 
00:09:52.572 "num_base_bdevs": 4, 00:09:52.572 "num_base_bdevs_discovered": 3, 00:09:52.572 "num_base_bdevs_operational": 4, 00:09:52.572 "base_bdevs_list": [ 00:09:52.572 { 00:09:52.572 "name": "BaseBdev1", 00:09:52.572 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:52.572 "is_configured": true, 00:09:52.572 "data_offset": 0, 00:09:52.572 "data_size": 65536 00:09:52.572 }, 00:09:52.572 { 00:09:52.572 "name": null, 00:09:52.572 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:52.572 "is_configured": false, 00:09:52.572 "data_offset": 0, 00:09:52.572 "data_size": 65536 00:09:52.572 }, 00:09:52.572 { 00:09:52.572 "name": "BaseBdev3", 00:09:52.572 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:52.572 "is_configured": true, 00:09:52.572 "data_offset": 0, 00:09:52.572 "data_size": 65536 00:09:52.572 }, 00:09:52.572 { 00:09:52.572 "name": "BaseBdev4", 00:09:52.572 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:52.572 "is_configured": true, 00:09:52.572 "data_offset": 0, 00:09:52.572 "data_size": 65536 00:09:52.572 } 00:09:52.572 ] 00:09:52.572 }' 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.572 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:52.871 20:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.871 [2024-12-08 20:05:24.815569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.871 20:05:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.871 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.134 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.134 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.134 "name": "Existed_Raid", 00:09:53.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.134 "strip_size_kb": 64, 00:09:53.134 "state": "configuring", 00:09:53.134 "raid_level": "raid0", 00:09:53.134 "superblock": false, 00:09:53.134 "num_base_bdevs": 4, 00:09:53.134 "num_base_bdevs_discovered": 2, 00:09:53.134 "num_base_bdevs_operational": 4, 00:09:53.134 "base_bdevs_list": [ 00:09:53.134 { 00:09:53.134 "name": "BaseBdev1", 00:09:53.134 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:53.134 "is_configured": true, 00:09:53.134 "data_offset": 0, 00:09:53.134 "data_size": 65536 00:09:53.134 }, 00:09:53.134 { 00:09:53.134 "name": null, 00:09:53.134 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:53.134 "is_configured": false, 00:09:53.134 "data_offset": 0, 00:09:53.134 "data_size": 65536 00:09:53.134 }, 00:09:53.134 { 00:09:53.135 "name": null, 00:09:53.135 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:53.135 "is_configured": false, 00:09:53.135 "data_offset": 0, 00:09:53.135 "data_size": 65536 00:09:53.135 }, 00:09:53.135 { 00:09:53.135 "name": "BaseBdev4", 00:09:53.135 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:53.135 "is_configured": true, 00:09:53.135 "data_offset": 0, 00:09:53.135 "data_size": 65536 00:09:53.135 } 00:09:53.135 ] 00:09:53.135 }' 00:09:53.135 20:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.135 20:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.393 [2024-12-08 20:05:25.314713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.393 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.652 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.652 "name": "Existed_Raid", 00:09:53.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.652 "strip_size_kb": 64, 00:09:53.652 "state": "configuring", 00:09:53.652 "raid_level": "raid0", 00:09:53.652 "superblock": false, 00:09:53.652 "num_base_bdevs": 4, 00:09:53.652 "num_base_bdevs_discovered": 3, 00:09:53.652 "num_base_bdevs_operational": 4, 00:09:53.652 "base_bdevs_list": [ 00:09:53.652 { 00:09:53.652 "name": "BaseBdev1", 00:09:53.652 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:53.652 "is_configured": true, 00:09:53.652 "data_offset": 0, 00:09:53.652 "data_size": 65536 00:09:53.652 }, 00:09:53.652 { 00:09:53.652 "name": null, 00:09:53.652 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:53.652 "is_configured": false, 00:09:53.652 "data_offset": 0, 00:09:53.652 "data_size": 65536 00:09:53.652 }, 00:09:53.652 { 00:09:53.652 "name": "BaseBdev3", 00:09:53.652 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 
00:09:53.652 "is_configured": true, 00:09:53.652 "data_offset": 0, 00:09:53.652 "data_size": 65536 00:09:53.652 }, 00:09:53.652 { 00:09:53.652 "name": "BaseBdev4", 00:09:53.652 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:53.652 "is_configured": true, 00:09:53.652 "data_offset": 0, 00:09:53.652 "data_size": 65536 00:09:53.652 } 00:09:53.652 ] 00:09:53.652 }' 00:09:53.652 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.652 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.911 [2024-12-08 20:05:25.793935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.911 20:05:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.911 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.171 "name": "Existed_Raid", 00:09:54.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.171 "strip_size_kb": 64, 00:09:54.171 "state": "configuring", 00:09:54.171 "raid_level": "raid0", 00:09:54.171 "superblock": false, 00:09:54.171 "num_base_bdevs": 4, 00:09:54.171 "num_base_bdevs_discovered": 2, 00:09:54.171 
"num_base_bdevs_operational": 4, 00:09:54.171 "base_bdevs_list": [ 00:09:54.171 { 00:09:54.171 "name": null, 00:09:54.171 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:54.171 "is_configured": false, 00:09:54.171 "data_offset": 0, 00:09:54.171 "data_size": 65536 00:09:54.171 }, 00:09:54.171 { 00:09:54.171 "name": null, 00:09:54.171 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:54.171 "is_configured": false, 00:09:54.171 "data_offset": 0, 00:09:54.171 "data_size": 65536 00:09:54.171 }, 00:09:54.171 { 00:09:54.171 "name": "BaseBdev3", 00:09:54.171 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:54.171 "is_configured": true, 00:09:54.171 "data_offset": 0, 00:09:54.171 "data_size": 65536 00:09:54.171 }, 00:09:54.171 { 00:09:54.171 "name": "BaseBdev4", 00:09:54.171 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:54.171 "is_configured": true, 00:09:54.171 "data_offset": 0, 00:09:54.171 "data_size": 65536 00:09:54.171 } 00:09:54.171 ] 00:09:54.171 }' 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.171 20:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.431 [2024-12-08 20:05:26.377402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.431 20:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.431 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.691 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.691 "name": "Existed_Raid", 00:09:54.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.691 "strip_size_kb": 64, 00:09:54.691 "state": "configuring", 00:09:54.691 "raid_level": "raid0", 00:09:54.691 "superblock": false, 00:09:54.691 "num_base_bdevs": 4, 00:09:54.691 "num_base_bdevs_discovered": 3, 00:09:54.691 "num_base_bdevs_operational": 4, 00:09:54.691 "base_bdevs_list": [ 00:09:54.691 { 00:09:54.691 "name": null, 00:09:54.691 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:54.691 "is_configured": false, 00:09:54.691 "data_offset": 0, 00:09:54.691 "data_size": 65536 00:09:54.691 }, 00:09:54.691 { 00:09:54.691 "name": "BaseBdev2", 00:09:54.691 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:54.691 "is_configured": true, 00:09:54.691 "data_offset": 0, 00:09:54.691 "data_size": 65536 00:09:54.691 }, 00:09:54.691 { 00:09:54.691 "name": "BaseBdev3", 00:09:54.691 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:54.691 "is_configured": true, 00:09:54.691 "data_offset": 0, 00:09:54.691 "data_size": 65536 00:09:54.691 }, 00:09:54.691 { 00:09:54.691 "name": "BaseBdev4", 00:09:54.691 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:54.691 "is_configured": true, 00:09:54.691 "data_offset": 0, 00:09:54.691 "data_size": 65536 00:09:54.691 } 00:09:54.691 ] 00:09:54.691 }' 00:09:54.691 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.691 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.950 20:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff6da2e9-8128-433c-a228-9b34eae85cad 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.950 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 [2024-12-08 20:05:26.953527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.210 [2024-12-08 20:05:26.953667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.210 [2024-12-08 20:05:26.953694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:55.210 [2024-12-08 20:05:26.954031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:09:55.210 [2024-12-08 20:05:26.954244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.210 [2024-12-08 20:05:26.954290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:55.210 [2024-12-08 20:05:26.954634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.210 NewBaseBdev 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.210 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.211 [ 00:09:55.211 { 00:09:55.211 "name": "NewBaseBdev", 00:09:55.211 "aliases": [ 00:09:55.211 "ff6da2e9-8128-433c-a228-9b34eae85cad" 00:09:55.211 ], 00:09:55.211 "product_name": "Malloc disk", 00:09:55.211 "block_size": 512, 00:09:55.211 "num_blocks": 65536, 00:09:55.211 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:55.211 "assigned_rate_limits": { 00:09:55.211 "rw_ios_per_sec": 0, 00:09:55.211 "rw_mbytes_per_sec": 0, 00:09:55.211 "r_mbytes_per_sec": 0, 00:09:55.211 "w_mbytes_per_sec": 0 00:09:55.211 }, 00:09:55.211 "claimed": true, 00:09:55.211 "claim_type": "exclusive_write", 00:09:55.211 "zoned": false, 00:09:55.211 "supported_io_types": { 00:09:55.211 "read": true, 00:09:55.211 "write": true, 00:09:55.211 "unmap": true, 00:09:55.211 "flush": true, 00:09:55.211 "reset": true, 00:09:55.211 "nvme_admin": false, 00:09:55.211 "nvme_io": false, 00:09:55.211 "nvme_io_md": false, 00:09:55.211 "write_zeroes": true, 00:09:55.211 "zcopy": true, 00:09:55.211 "get_zone_info": false, 00:09:55.211 "zone_management": false, 00:09:55.211 "zone_append": false, 00:09:55.211 "compare": false, 00:09:55.211 "compare_and_write": false, 00:09:55.211 "abort": true, 00:09:55.211 "seek_hole": false, 00:09:55.211 "seek_data": false, 00:09:55.211 "copy": true, 00:09:55.211 "nvme_iov_md": false 00:09:55.211 }, 00:09:55.211 "memory_domains": [ 00:09:55.211 { 00:09:55.211 "dma_device_id": "system", 00:09:55.211 "dma_device_type": 1 00:09:55.211 }, 00:09:55.211 { 00:09:55.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.211 "dma_device_type": 2 00:09:55.211 } 00:09:55.211 ], 00:09:55.211 "driver_specific": {} 00:09:55.211 } 00:09:55.211 ] 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.211 20:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.211 "name": "Existed_Raid", 00:09:55.211 "uuid": "c3e3f037-8cd1-4003-a231-0ef27d0232fb", 00:09:55.211 "strip_size_kb": 64, 00:09:55.211 "state": "online", 00:09:55.211 "raid_level": "raid0", 00:09:55.211 "superblock": false, 00:09:55.211 "num_base_bdevs": 4, 00:09:55.211 
"num_base_bdevs_discovered": 4, 00:09:55.211 "num_base_bdevs_operational": 4, 00:09:55.211 "base_bdevs_list": [ 00:09:55.211 { 00:09:55.211 "name": "NewBaseBdev", 00:09:55.211 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:55.211 "is_configured": true, 00:09:55.211 "data_offset": 0, 00:09:55.211 "data_size": 65536 00:09:55.211 }, 00:09:55.211 { 00:09:55.211 "name": "BaseBdev2", 00:09:55.211 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:55.211 "is_configured": true, 00:09:55.211 "data_offset": 0, 00:09:55.211 "data_size": 65536 00:09:55.211 }, 00:09:55.211 { 00:09:55.211 "name": "BaseBdev3", 00:09:55.211 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:55.211 "is_configured": true, 00:09:55.211 "data_offset": 0, 00:09:55.211 "data_size": 65536 00:09:55.211 }, 00:09:55.211 { 00:09:55.211 "name": "BaseBdev4", 00:09:55.211 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:55.211 "is_configured": true, 00:09:55.211 "data_offset": 0, 00:09:55.211 "data_size": 65536 00:09:55.211 } 00:09:55.211 ] 00:09:55.211 }' 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.211 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.472 [2024-12-08 20:05:27.425214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.472 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.733 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.733 "name": "Existed_Raid", 00:09:55.733 "aliases": [ 00:09:55.733 "c3e3f037-8cd1-4003-a231-0ef27d0232fb" 00:09:55.733 ], 00:09:55.733 "product_name": "Raid Volume", 00:09:55.733 "block_size": 512, 00:09:55.733 "num_blocks": 262144, 00:09:55.733 "uuid": "c3e3f037-8cd1-4003-a231-0ef27d0232fb", 00:09:55.733 "assigned_rate_limits": { 00:09:55.733 "rw_ios_per_sec": 0, 00:09:55.733 "rw_mbytes_per_sec": 0, 00:09:55.733 "r_mbytes_per_sec": 0, 00:09:55.733 "w_mbytes_per_sec": 0 00:09:55.733 }, 00:09:55.733 "claimed": false, 00:09:55.733 "zoned": false, 00:09:55.733 "supported_io_types": { 00:09:55.733 "read": true, 00:09:55.733 "write": true, 00:09:55.733 "unmap": true, 00:09:55.733 "flush": true, 00:09:55.733 "reset": true, 00:09:55.733 "nvme_admin": false, 00:09:55.733 "nvme_io": false, 00:09:55.733 "nvme_io_md": false, 00:09:55.733 "write_zeroes": true, 00:09:55.733 "zcopy": false, 00:09:55.733 "get_zone_info": false, 00:09:55.733 "zone_management": false, 00:09:55.733 "zone_append": false, 00:09:55.733 "compare": false, 00:09:55.733 "compare_and_write": false, 00:09:55.733 "abort": false, 00:09:55.733 "seek_hole": false, 00:09:55.733 "seek_data": false, 00:09:55.733 "copy": false, 00:09:55.733 "nvme_iov_md": false 00:09:55.733 }, 00:09:55.733 "memory_domains": [ 
00:09:55.733 { 00:09:55.733 "dma_device_id": "system", 00:09:55.733 "dma_device_type": 1 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.733 "dma_device_type": 2 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "system", 00:09:55.733 "dma_device_type": 1 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.733 "dma_device_type": 2 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "system", 00:09:55.733 "dma_device_type": 1 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.733 "dma_device_type": 2 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "system", 00:09:55.733 "dma_device_type": 1 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.733 "dma_device_type": 2 00:09:55.733 } 00:09:55.733 ], 00:09:55.733 "driver_specific": { 00:09:55.733 "raid": { 00:09:55.733 "uuid": "c3e3f037-8cd1-4003-a231-0ef27d0232fb", 00:09:55.733 "strip_size_kb": 64, 00:09:55.733 "state": "online", 00:09:55.733 "raid_level": "raid0", 00:09:55.733 "superblock": false, 00:09:55.733 "num_base_bdevs": 4, 00:09:55.733 "num_base_bdevs_discovered": 4, 00:09:55.733 "num_base_bdevs_operational": 4, 00:09:55.733 "base_bdevs_list": [ 00:09:55.733 { 00:09:55.733 "name": "NewBaseBdev", 00:09:55.733 "uuid": "ff6da2e9-8128-433c-a228-9b34eae85cad", 00:09:55.733 "is_configured": true, 00:09:55.733 "data_offset": 0, 00:09:55.733 "data_size": 65536 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "name": "BaseBdev2", 00:09:55.733 "uuid": "7e2f93a8-441d-4591-8128-8b84fd6da572", 00:09:55.733 "is_configured": true, 00:09:55.733 "data_offset": 0, 00:09:55.733 "data_size": 65536 00:09:55.733 }, 00:09:55.733 { 00:09:55.733 "name": "BaseBdev3", 00:09:55.733 "uuid": "9a14fcf7-7aff-4f9f-9984-1467718bf8ca", 00:09:55.733 "is_configured": true, 00:09:55.734 "data_offset": 0, 00:09:55.734 "data_size": 65536 
00:09:55.734 }, 00:09:55.734 { 00:09:55.734 "name": "BaseBdev4", 00:09:55.734 "uuid": "1bbc994c-e683-4a19-975d-8aaadd0d6073", 00:09:55.734 "is_configured": true, 00:09:55.734 "data_offset": 0, 00:09:55.734 "data_size": 65536 00:09:55.734 } 00:09:55.734 ] 00:09:55.734 } 00:09:55.734 } 00:09:55.734 }' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:55.734 BaseBdev2 00:09:55.734 BaseBdev3 00:09:55.734 BaseBdev4' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.734 
20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.734 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.995 [2024-12-08 20:05:27.732294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.995 [2024-12-08 20:05:27.732324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.995 [2024-12-08 20:05:27.732408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.995 [2024-12-08 20:05:27.732476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.995 [2024-12-08 20:05:27.732487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69191 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69191 ']' 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69191 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69191 00:09:55.995 killing process with pid 69191 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69191' 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69191 00:09:55.995 [2024-12-08 20:05:27.780820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.995 20:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69191 00:09:56.255 [2024-12-08 20:05:28.167286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.638 00:09:57.638 real 0m11.497s 00:09:57.638 user 0m18.294s 00:09:57.638 sys 0m2.064s 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.638 ************************************ 00:09:57.638 END TEST raid_state_function_test 00:09:57.638 ************************************ 00:09:57.638 20:05:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:57.638 20:05:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.638 20:05:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.638 20:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.638 ************************************ 00:09:57.638 START TEST raid_state_function_test_sb 00:09:57.638 ************************************ 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.638 
20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69862 00:09:57.638 Process raid pid: 69862 00:09:57.638 20:05:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69862' 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69862 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69862 ']' 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.638 20:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.638 [2024-12-08 20:05:29.441053] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:57.638 [2024-12-08 20:05:29.441244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.899 [2024-12-08 20:05:29.617279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.899 [2024-12-08 20:05:29.730295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.160 [2024-12-08 20:05:29.929368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.160 [2024-12-08 20:05:29.929410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.421 [2024-12-08 20:05:30.276593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.421 [2024-12-08 20:05:30.276711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.421 [2024-12-08 20:05:30.276742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.421 [2024-12-08 20:05:30.276753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.421 [2024-12-08 20:05:30.276760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:58.421 [2024-12-08 20:05:30.276769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.421 [2024-12-08 20:05:30.276776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.421 [2024-12-08 20:05:30.276784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.421 20:05:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.421 "name": "Existed_Raid", 00:09:58.421 "uuid": "f88dc304-6b55-4d22-a21b-8c099b868d6c", 00:09:58.421 "strip_size_kb": 64, 00:09:58.421 "state": "configuring", 00:09:58.421 "raid_level": "raid0", 00:09:58.421 "superblock": true, 00:09:58.421 "num_base_bdevs": 4, 00:09:58.421 "num_base_bdevs_discovered": 0, 00:09:58.421 "num_base_bdevs_operational": 4, 00:09:58.421 "base_bdevs_list": [ 00:09:58.421 { 00:09:58.421 "name": "BaseBdev1", 00:09:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.421 "is_configured": false, 00:09:58.421 "data_offset": 0, 00:09:58.421 "data_size": 0 00:09:58.421 }, 00:09:58.421 { 00:09:58.421 "name": "BaseBdev2", 00:09:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.421 "is_configured": false, 00:09:58.421 "data_offset": 0, 00:09:58.421 "data_size": 0 00:09:58.421 }, 00:09:58.421 { 00:09:58.421 "name": "BaseBdev3", 00:09:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.421 "is_configured": false, 00:09:58.421 "data_offset": 0, 00:09:58.421 "data_size": 0 00:09:58.421 }, 00:09:58.421 { 00:09:58.421 "name": "BaseBdev4", 00:09:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.421 "is_configured": false, 00:09:58.421 "data_offset": 0, 00:09:58.421 "data_size": 0 00:09:58.421 } 00:09:58.421 ] 00:09:58.421 }' 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.421 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 [2024-12-08 20:05:30.719809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.993 [2024-12-08 20:05:30.719908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 [2024-12-08 20:05:30.727786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.993 [2024-12-08 20:05:30.727830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.993 [2024-12-08 20:05:30.727840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.993 [2024-12-08 20:05:30.727866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.993 [2024-12-08 20:05:30.727874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.993 [2024-12-08 20:05:30.727884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.993 [2024-12-08 20:05:30.727891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:58.993 [2024-12-08 20:05:30.727900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 [2024-12-08 20:05:30.770790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.993 BaseBdev1 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 [ 00:09:58.993 { 00:09:58.993 "name": "BaseBdev1", 00:09:58.993 "aliases": [ 00:09:58.993 "3beb92af-9ef7-4eb3-a585-451010e43c16" 00:09:58.993 ], 00:09:58.993 "product_name": "Malloc disk", 00:09:58.993 "block_size": 512, 00:09:58.993 "num_blocks": 65536, 00:09:58.993 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:09:58.993 "assigned_rate_limits": { 00:09:58.993 "rw_ios_per_sec": 0, 00:09:58.993 "rw_mbytes_per_sec": 0, 00:09:58.993 "r_mbytes_per_sec": 0, 00:09:58.993 "w_mbytes_per_sec": 0 00:09:58.993 }, 00:09:58.993 "claimed": true, 00:09:58.993 "claim_type": "exclusive_write", 00:09:58.993 "zoned": false, 00:09:58.993 "supported_io_types": { 00:09:58.993 "read": true, 00:09:58.993 "write": true, 00:09:58.993 "unmap": true, 00:09:58.993 "flush": true, 00:09:58.993 "reset": true, 00:09:58.993 "nvme_admin": false, 00:09:58.993 "nvme_io": false, 00:09:58.993 "nvme_io_md": false, 00:09:58.993 "write_zeroes": true, 00:09:58.993 "zcopy": true, 00:09:58.993 "get_zone_info": false, 00:09:58.993 "zone_management": false, 00:09:58.993 "zone_append": false, 00:09:58.993 "compare": false, 00:09:58.993 "compare_and_write": false, 00:09:58.993 "abort": true, 00:09:58.993 "seek_hole": false, 00:09:58.993 "seek_data": false, 00:09:58.993 "copy": true, 00:09:58.993 "nvme_iov_md": false 00:09:58.993 }, 00:09:58.993 "memory_domains": [ 00:09:58.993 { 00:09:58.993 "dma_device_id": "system", 00:09:58.993 "dma_device_type": 1 00:09:58.993 }, 00:09:58.993 { 00:09:58.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.993 "dma_device_type": 2 00:09:58.993 } 00:09:58.993 ], 00:09:58.993 "driver_specific": {} 
00:09:58.993 } 00:09:58.993 ] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.993 "name": "Existed_Raid", 00:09:58.993 "uuid": "97200690-77fa-43f4-afdf-1326471d5b21", 00:09:58.993 "strip_size_kb": 64, 00:09:58.993 "state": "configuring", 00:09:58.993 "raid_level": "raid0", 00:09:58.993 "superblock": true, 00:09:58.993 "num_base_bdevs": 4, 00:09:58.993 "num_base_bdevs_discovered": 1, 00:09:58.993 "num_base_bdevs_operational": 4, 00:09:58.993 "base_bdevs_list": [ 00:09:58.993 { 00:09:58.993 "name": "BaseBdev1", 00:09:58.993 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:09:58.993 "is_configured": true, 00:09:58.993 "data_offset": 2048, 00:09:58.993 "data_size": 63488 00:09:58.993 }, 00:09:58.993 { 00:09:58.993 "name": "BaseBdev2", 00:09:58.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.993 "is_configured": false, 00:09:58.993 "data_offset": 0, 00:09:58.993 "data_size": 0 00:09:58.993 }, 00:09:58.993 { 00:09:58.993 "name": "BaseBdev3", 00:09:58.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.993 "is_configured": false, 00:09:58.993 "data_offset": 0, 00:09:58.993 "data_size": 0 00:09:58.993 }, 00:09:58.993 { 00:09:58.993 "name": "BaseBdev4", 00:09:58.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.993 "is_configured": false, 00:09:58.993 "data_offset": 0, 00:09:58.993 "data_size": 0 00:09:58.993 } 00:09:58.993 ] 00:09:58.993 }' 00:09:58.993 20:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.994 20:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.254 [2024-12-08 20:05:31.206098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.254 [2024-12-08 20:05:31.206237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.254 [2024-12-08 20:05:31.214186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.254 [2024-12-08 20:05:31.216178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.254 [2024-12-08 20:05:31.216259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.254 [2024-12-08 20:05:31.216289] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.254 [2024-12-08 20:05:31.216313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.254 [2024-12-08 20:05:31.216333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.254 [2024-12-08 20:05:31.216353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.254 20:05:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.254 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.514 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.514 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.514 "name": 
"Existed_Raid", 00:09:59.514 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:09:59.514 "strip_size_kb": 64, 00:09:59.514 "state": "configuring", 00:09:59.514 "raid_level": "raid0", 00:09:59.514 "superblock": true, 00:09:59.514 "num_base_bdevs": 4, 00:09:59.514 "num_base_bdevs_discovered": 1, 00:09:59.514 "num_base_bdevs_operational": 4, 00:09:59.514 "base_bdevs_list": [ 00:09:59.514 { 00:09:59.514 "name": "BaseBdev1", 00:09:59.514 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:09:59.514 "is_configured": true, 00:09:59.514 "data_offset": 2048, 00:09:59.514 "data_size": 63488 00:09:59.514 }, 00:09:59.514 { 00:09:59.514 "name": "BaseBdev2", 00:09:59.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.514 "is_configured": false, 00:09:59.514 "data_offset": 0, 00:09:59.514 "data_size": 0 00:09:59.514 }, 00:09:59.514 { 00:09:59.514 "name": "BaseBdev3", 00:09:59.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.514 "is_configured": false, 00:09:59.514 "data_offset": 0, 00:09:59.514 "data_size": 0 00:09:59.514 }, 00:09:59.514 { 00:09:59.514 "name": "BaseBdev4", 00:09:59.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.514 "is_configured": false, 00:09:59.514 "data_offset": 0, 00:09:59.514 "data_size": 0 00:09:59.514 } 00:09:59.514 ] 00:09:59.514 }' 00:09:59.514 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.514 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.775 [2024-12-08 20:05:31.690686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:59.775 BaseBdev2 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.775 [ 00:09:59.775 { 00:09:59.775 "name": "BaseBdev2", 00:09:59.775 "aliases": [ 00:09:59.775 "5f13543c-2170-4558-930b-47dba5fef1b1" 00:09:59.775 ], 00:09:59.775 "product_name": "Malloc disk", 00:09:59.775 "block_size": 512, 00:09:59.775 "num_blocks": 65536, 00:09:59.775 "uuid": "5f13543c-2170-4558-930b-47dba5fef1b1", 00:09:59.775 
"assigned_rate_limits": { 00:09:59.775 "rw_ios_per_sec": 0, 00:09:59.775 "rw_mbytes_per_sec": 0, 00:09:59.775 "r_mbytes_per_sec": 0, 00:09:59.775 "w_mbytes_per_sec": 0 00:09:59.775 }, 00:09:59.775 "claimed": true, 00:09:59.775 "claim_type": "exclusive_write", 00:09:59.775 "zoned": false, 00:09:59.775 "supported_io_types": { 00:09:59.775 "read": true, 00:09:59.775 "write": true, 00:09:59.775 "unmap": true, 00:09:59.775 "flush": true, 00:09:59.775 "reset": true, 00:09:59.775 "nvme_admin": false, 00:09:59.775 "nvme_io": false, 00:09:59.775 "nvme_io_md": false, 00:09:59.775 "write_zeroes": true, 00:09:59.775 "zcopy": true, 00:09:59.775 "get_zone_info": false, 00:09:59.775 "zone_management": false, 00:09:59.775 "zone_append": false, 00:09:59.775 "compare": false, 00:09:59.775 "compare_and_write": false, 00:09:59.775 "abort": true, 00:09:59.775 "seek_hole": false, 00:09:59.775 "seek_data": false, 00:09:59.775 "copy": true, 00:09:59.775 "nvme_iov_md": false 00:09:59.775 }, 00:09:59.775 "memory_domains": [ 00:09:59.775 { 00:09:59.775 "dma_device_id": "system", 00:09:59.775 "dma_device_type": 1 00:09:59.775 }, 00:09:59.775 { 00:09:59.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.775 "dma_device_type": 2 00:09:59.775 } 00:09:59.775 ], 00:09:59.775 "driver_specific": {} 00:09:59.775 } 00:09:59.775 ] 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.775 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.035 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.035 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.035 "name": "Existed_Raid", 00:10:00.035 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:00.035 "strip_size_kb": 64, 00:10:00.035 "state": "configuring", 00:10:00.035 "raid_level": "raid0", 00:10:00.035 "superblock": true, 00:10:00.035 "num_base_bdevs": 4, 00:10:00.035 "num_base_bdevs_discovered": 2, 00:10:00.035 "num_base_bdevs_operational": 4, 
00:10:00.035 "base_bdevs_list": [ 00:10:00.035 { 00:10:00.035 "name": "BaseBdev1", 00:10:00.035 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:10:00.035 "is_configured": true, 00:10:00.035 "data_offset": 2048, 00:10:00.035 "data_size": 63488 00:10:00.035 }, 00:10:00.035 { 00:10:00.035 "name": "BaseBdev2", 00:10:00.035 "uuid": "5f13543c-2170-4558-930b-47dba5fef1b1", 00:10:00.035 "is_configured": true, 00:10:00.035 "data_offset": 2048, 00:10:00.035 "data_size": 63488 00:10:00.035 }, 00:10:00.035 { 00:10:00.035 "name": "BaseBdev3", 00:10:00.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.035 "is_configured": false, 00:10:00.035 "data_offset": 0, 00:10:00.035 "data_size": 0 00:10:00.035 }, 00:10:00.035 { 00:10:00.035 "name": "BaseBdev4", 00:10:00.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.035 "is_configured": false, 00:10:00.035 "data_offset": 0, 00:10:00.035 "data_size": 0 00:10:00.035 } 00:10:00.035 ] 00:10:00.035 }' 00:10:00.035 20:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.035 20:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.294 [2024-12-08 20:05:32.187351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.294 BaseBdev3 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.294 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.295 [ 00:10:00.295 { 00:10:00.295 "name": "BaseBdev3", 00:10:00.295 "aliases": [ 00:10:00.295 "3ea3c227-aca7-460e-8c08-c186138d9c0f" 00:10:00.295 ], 00:10:00.295 "product_name": "Malloc disk", 00:10:00.295 "block_size": 512, 00:10:00.295 "num_blocks": 65536, 00:10:00.295 "uuid": "3ea3c227-aca7-460e-8c08-c186138d9c0f", 00:10:00.295 "assigned_rate_limits": { 00:10:00.295 "rw_ios_per_sec": 0, 00:10:00.295 "rw_mbytes_per_sec": 0, 00:10:00.295 "r_mbytes_per_sec": 0, 00:10:00.295 "w_mbytes_per_sec": 0 00:10:00.295 }, 00:10:00.295 "claimed": true, 00:10:00.295 "claim_type": "exclusive_write", 00:10:00.295 "zoned": false, 00:10:00.295 "supported_io_types": { 00:10:00.295 "read": true, 00:10:00.295 
"write": true, 00:10:00.295 "unmap": true, 00:10:00.295 "flush": true, 00:10:00.295 "reset": true, 00:10:00.295 "nvme_admin": false, 00:10:00.295 "nvme_io": false, 00:10:00.295 "nvme_io_md": false, 00:10:00.295 "write_zeroes": true, 00:10:00.295 "zcopy": true, 00:10:00.295 "get_zone_info": false, 00:10:00.295 "zone_management": false, 00:10:00.295 "zone_append": false, 00:10:00.295 "compare": false, 00:10:00.295 "compare_and_write": false, 00:10:00.295 "abort": true, 00:10:00.295 "seek_hole": false, 00:10:00.295 "seek_data": false, 00:10:00.295 "copy": true, 00:10:00.295 "nvme_iov_md": false 00:10:00.295 }, 00:10:00.295 "memory_domains": [ 00:10:00.295 { 00:10:00.295 "dma_device_id": "system", 00:10:00.295 "dma_device_type": 1 00:10:00.295 }, 00:10:00.295 { 00:10:00.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.295 "dma_device_type": 2 00:10:00.295 } 00:10:00.295 ], 00:10:00.295 "driver_specific": {} 00:10:00.295 } 00:10:00.295 ] 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.295 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.555 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.555 "name": "Existed_Raid", 00:10:00.555 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:00.555 "strip_size_kb": 64, 00:10:00.555 "state": "configuring", 00:10:00.555 "raid_level": "raid0", 00:10:00.555 "superblock": true, 00:10:00.555 "num_base_bdevs": 4, 00:10:00.555 "num_base_bdevs_discovered": 3, 00:10:00.555 "num_base_bdevs_operational": 4, 00:10:00.555 "base_bdevs_list": [ 00:10:00.555 { 00:10:00.555 "name": "BaseBdev1", 00:10:00.555 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:10:00.555 "is_configured": true, 00:10:00.555 "data_offset": 2048, 00:10:00.555 "data_size": 63488 00:10:00.555 }, 00:10:00.555 { 00:10:00.555 "name": "BaseBdev2", 00:10:00.555 "uuid": 
"5f13543c-2170-4558-930b-47dba5fef1b1", 00:10:00.555 "is_configured": true, 00:10:00.555 "data_offset": 2048, 00:10:00.555 "data_size": 63488 00:10:00.555 }, 00:10:00.555 { 00:10:00.555 "name": "BaseBdev3", 00:10:00.555 "uuid": "3ea3c227-aca7-460e-8c08-c186138d9c0f", 00:10:00.555 "is_configured": true, 00:10:00.555 "data_offset": 2048, 00:10:00.555 "data_size": 63488 00:10:00.555 }, 00:10:00.555 { 00:10:00.555 "name": "BaseBdev4", 00:10:00.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.555 "is_configured": false, 00:10:00.555 "data_offset": 0, 00:10:00.555 "data_size": 0 00:10:00.555 } 00:10:00.555 ] 00:10:00.555 }' 00:10:00.555 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.555 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.815 [2024-12-08 20:05:32.701974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.815 [2024-12-08 20:05:32.702240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.815 [2024-12-08 20:05:32.702255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:00.815 [2024-12-08 20:05:32.702511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:00.815 [2024-12-08 20:05:32.702668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.815 [2024-12-08 20:05:32.702680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:00.815 [2024-12-08 20:05:32.702828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.815 BaseBdev4 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.815 [ 00:10:00.815 { 00:10:00.815 "name": "BaseBdev4", 00:10:00.815 "aliases": [ 00:10:00.815 "c5173c03-7ae3-4a94-b030-f4079300e672" 00:10:00.815 ], 00:10:00.815 "product_name": "Malloc disk", 00:10:00.815 "block_size": 512, 00:10:00.815 
"num_blocks": 65536, 00:10:00.815 "uuid": "c5173c03-7ae3-4a94-b030-f4079300e672", 00:10:00.815 "assigned_rate_limits": { 00:10:00.815 "rw_ios_per_sec": 0, 00:10:00.815 "rw_mbytes_per_sec": 0, 00:10:00.815 "r_mbytes_per_sec": 0, 00:10:00.815 "w_mbytes_per_sec": 0 00:10:00.815 }, 00:10:00.815 "claimed": true, 00:10:00.815 "claim_type": "exclusive_write", 00:10:00.815 "zoned": false, 00:10:00.815 "supported_io_types": { 00:10:00.815 "read": true, 00:10:00.815 "write": true, 00:10:00.815 "unmap": true, 00:10:00.815 "flush": true, 00:10:00.815 "reset": true, 00:10:00.815 "nvme_admin": false, 00:10:00.815 "nvme_io": false, 00:10:00.815 "nvme_io_md": false, 00:10:00.815 "write_zeroes": true, 00:10:00.815 "zcopy": true, 00:10:00.815 "get_zone_info": false, 00:10:00.815 "zone_management": false, 00:10:00.815 "zone_append": false, 00:10:00.815 "compare": false, 00:10:00.815 "compare_and_write": false, 00:10:00.815 "abort": true, 00:10:00.815 "seek_hole": false, 00:10:00.815 "seek_data": false, 00:10:00.815 "copy": true, 00:10:00.815 "nvme_iov_md": false 00:10:00.815 }, 00:10:00.815 "memory_domains": [ 00:10:00.815 { 00:10:00.815 "dma_device_id": "system", 00:10:00.815 "dma_device_type": 1 00:10:00.815 }, 00:10:00.815 { 00:10:00.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.815 "dma_device_type": 2 00:10:00.815 } 00:10:00.815 ], 00:10:00.815 "driver_specific": {} 00:10:00.815 } 00:10:00.815 ] 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.815 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.074 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.074 "name": "Existed_Raid", 00:10:01.074 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:01.074 "strip_size_kb": 64, 00:10:01.074 "state": "online", 00:10:01.075 "raid_level": "raid0", 00:10:01.075 "superblock": true, 00:10:01.075 "num_base_bdevs": 4, 
00:10:01.075 "num_base_bdevs_discovered": 4, 00:10:01.075 "num_base_bdevs_operational": 4, 00:10:01.075 "base_bdevs_list": [ 00:10:01.075 { 00:10:01.075 "name": "BaseBdev1", 00:10:01.075 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:10:01.075 "is_configured": true, 00:10:01.075 "data_offset": 2048, 00:10:01.075 "data_size": 63488 00:10:01.075 }, 00:10:01.075 { 00:10:01.075 "name": "BaseBdev2", 00:10:01.075 "uuid": "5f13543c-2170-4558-930b-47dba5fef1b1", 00:10:01.075 "is_configured": true, 00:10:01.075 "data_offset": 2048, 00:10:01.075 "data_size": 63488 00:10:01.075 }, 00:10:01.075 { 00:10:01.075 "name": "BaseBdev3", 00:10:01.075 "uuid": "3ea3c227-aca7-460e-8c08-c186138d9c0f", 00:10:01.075 "is_configured": true, 00:10:01.075 "data_offset": 2048, 00:10:01.075 "data_size": 63488 00:10:01.075 }, 00:10:01.075 { 00:10:01.075 "name": "BaseBdev4", 00:10:01.075 "uuid": "c5173c03-7ae3-4a94-b030-f4079300e672", 00:10:01.075 "is_configured": true, 00:10:01.075 "data_offset": 2048, 00:10:01.075 "data_size": 63488 00:10:01.075 } 00:10:01.075 ] 00:10:01.075 }' 00:10:01.075 20:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.075 20:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.334 
20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.334 [2024-12-08 20:05:33.205472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.334 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.334 "name": "Existed_Raid", 00:10:01.334 "aliases": [ 00:10:01.334 "d5c0b472-441e-465d-a6c9-0781bcabedd7" 00:10:01.334 ], 00:10:01.334 "product_name": "Raid Volume", 00:10:01.334 "block_size": 512, 00:10:01.334 "num_blocks": 253952, 00:10:01.334 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:01.334 "assigned_rate_limits": { 00:10:01.334 "rw_ios_per_sec": 0, 00:10:01.334 "rw_mbytes_per_sec": 0, 00:10:01.334 "r_mbytes_per_sec": 0, 00:10:01.334 "w_mbytes_per_sec": 0 00:10:01.334 }, 00:10:01.334 "claimed": false, 00:10:01.334 "zoned": false, 00:10:01.334 "supported_io_types": { 00:10:01.334 "read": true, 00:10:01.334 "write": true, 00:10:01.334 "unmap": true, 00:10:01.334 "flush": true, 00:10:01.334 "reset": true, 00:10:01.334 "nvme_admin": false, 00:10:01.334 "nvme_io": false, 00:10:01.334 "nvme_io_md": false, 00:10:01.334 "write_zeroes": true, 00:10:01.334 "zcopy": false, 00:10:01.334 "get_zone_info": false, 00:10:01.334 "zone_management": false, 00:10:01.334 "zone_append": false, 00:10:01.334 "compare": false, 00:10:01.334 "compare_and_write": false, 00:10:01.334 "abort": false, 00:10:01.334 "seek_hole": false, 00:10:01.334 "seek_data": false, 00:10:01.334 "copy": false, 00:10:01.334 
"nvme_iov_md": false 00:10:01.334 }, 00:10:01.334 "memory_domains": [ 00:10:01.334 { 00:10:01.334 "dma_device_id": "system", 00:10:01.334 "dma_device_type": 1 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.334 "dma_device_type": 2 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "system", 00:10:01.334 "dma_device_type": 1 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.334 "dma_device_type": 2 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "system", 00:10:01.334 "dma_device_type": 1 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.334 "dma_device_type": 2 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "system", 00:10:01.334 "dma_device_type": 1 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.334 "dma_device_type": 2 00:10:01.334 } 00:10:01.334 ], 00:10:01.334 "driver_specific": { 00:10:01.334 "raid": { 00:10:01.334 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:01.334 "strip_size_kb": 64, 00:10:01.334 "state": "online", 00:10:01.334 "raid_level": "raid0", 00:10:01.334 "superblock": true, 00:10:01.334 "num_base_bdevs": 4, 00:10:01.334 "num_base_bdevs_discovered": 4, 00:10:01.334 "num_base_bdevs_operational": 4, 00:10:01.334 "base_bdevs_list": [ 00:10:01.334 { 00:10:01.334 "name": "BaseBdev1", 00:10:01.334 "uuid": "3beb92af-9ef7-4eb3-a585-451010e43c16", 00:10:01.334 "is_configured": true, 00:10:01.334 "data_offset": 2048, 00:10:01.334 "data_size": 63488 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "name": "BaseBdev2", 00:10:01.334 "uuid": "5f13543c-2170-4558-930b-47dba5fef1b1", 00:10:01.334 "is_configured": true, 00:10:01.334 "data_offset": 2048, 00:10:01.334 "data_size": 63488 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "name": "BaseBdev3", 00:10:01.334 "uuid": "3ea3c227-aca7-460e-8c08-c186138d9c0f", 00:10:01.334 "is_configured": true, 
00:10:01.334 "data_offset": 2048, 00:10:01.334 "data_size": 63488 00:10:01.334 }, 00:10:01.334 { 00:10:01.334 "name": "BaseBdev4", 00:10:01.335 "uuid": "c5173c03-7ae3-4a94-b030-f4079300e672", 00:10:01.335 "is_configured": true, 00:10:01.335 "data_offset": 2048, 00:10:01.335 "data_size": 63488 00:10:01.335 } 00:10:01.335 ] 00:10:01.335 } 00:10:01.335 } 00:10:01.335 }' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.335 BaseBdev2 00:10:01.335 BaseBdev3 00:10:01.335 BaseBdev4' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.335 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.595 20:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.596 [2024-12-08 20:05:33.460755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.596 [2024-12-08 20:05:33.460784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.596 [2024-12-08 20:05:33.460834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.596 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.857 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:01.857 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.857 "name": "Existed_Raid", 00:10:01.857 "uuid": "d5c0b472-441e-465d-a6c9-0781bcabedd7", 00:10:01.857 "strip_size_kb": 64, 00:10:01.857 "state": "offline", 00:10:01.857 "raid_level": "raid0", 00:10:01.857 "superblock": true, 00:10:01.857 "num_base_bdevs": 4, 00:10:01.857 "num_base_bdevs_discovered": 3, 00:10:01.857 "num_base_bdevs_operational": 3, 00:10:01.857 "base_bdevs_list": [ 00:10:01.857 { 00:10:01.857 "name": null, 00:10:01.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.857 "is_configured": false, 00:10:01.857 "data_offset": 0, 00:10:01.857 "data_size": 63488 00:10:01.857 }, 00:10:01.857 { 00:10:01.857 "name": "BaseBdev2", 00:10:01.857 "uuid": "5f13543c-2170-4558-930b-47dba5fef1b1", 00:10:01.857 "is_configured": true, 00:10:01.857 "data_offset": 2048, 00:10:01.857 "data_size": 63488 00:10:01.857 }, 00:10:01.857 { 00:10:01.857 "name": "BaseBdev3", 00:10:01.857 "uuid": "3ea3c227-aca7-460e-8c08-c186138d9c0f", 00:10:01.857 "is_configured": true, 00:10:01.857 "data_offset": 2048, 00:10:01.857 "data_size": 63488 00:10:01.857 }, 00:10:01.857 { 00:10:01.857 "name": "BaseBdev4", 00:10:01.857 "uuid": "c5173c03-7ae3-4a94-b030-f4079300e672", 00:10:01.857 "is_configured": true, 00:10:01.857 "data_offset": 2048, 00:10:01.857 "data_size": 63488 00:10:01.857 } 00:10:01.857 ] 00:10:01.857 }' 00:10:01.857 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.857 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.117 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.117 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.117 20:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.117 20:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.117 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.117 20:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.117 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.117 [2024-12-08 20:05:34.025800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.377 [2024-12-08 20:05:34.180110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:02.377 20:05:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.377 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.377 [2024-12-08 20:05:34.324649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:02.377 [2024-12-08 20:05:34.324744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 BaseBdev2 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 [ 00:10:02.636 { 00:10:02.636 "name": "BaseBdev2", 00:10:02.636 "aliases": [ 00:10:02.636 
"d94d396c-288b-44e7-9db4-f8e42f3a312e" 00:10:02.636 ], 00:10:02.636 "product_name": "Malloc disk", 00:10:02.636 "block_size": 512, 00:10:02.636 "num_blocks": 65536, 00:10:02.636 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:02.636 "assigned_rate_limits": { 00:10:02.636 "rw_ios_per_sec": 0, 00:10:02.636 "rw_mbytes_per_sec": 0, 00:10:02.636 "r_mbytes_per_sec": 0, 00:10:02.636 "w_mbytes_per_sec": 0 00:10:02.636 }, 00:10:02.636 "claimed": false, 00:10:02.636 "zoned": false, 00:10:02.636 "supported_io_types": { 00:10:02.636 "read": true, 00:10:02.636 "write": true, 00:10:02.636 "unmap": true, 00:10:02.636 "flush": true, 00:10:02.636 "reset": true, 00:10:02.636 "nvme_admin": false, 00:10:02.636 "nvme_io": false, 00:10:02.636 "nvme_io_md": false, 00:10:02.636 "write_zeroes": true, 00:10:02.636 "zcopy": true, 00:10:02.636 "get_zone_info": false, 00:10:02.636 "zone_management": false, 00:10:02.636 "zone_append": false, 00:10:02.636 "compare": false, 00:10:02.636 "compare_and_write": false, 00:10:02.636 "abort": true, 00:10:02.636 "seek_hole": false, 00:10:02.636 "seek_data": false, 00:10:02.636 "copy": true, 00:10:02.636 "nvme_iov_md": false 00:10:02.636 }, 00:10:02.636 "memory_domains": [ 00:10:02.636 { 00:10:02.636 "dma_device_id": "system", 00:10:02.636 "dma_device_type": 1 00:10:02.636 }, 00:10:02.636 { 00:10:02.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.636 "dma_device_type": 2 00:10:02.636 } 00:10:02.636 ], 00:10:02.636 "driver_specific": {} 00:10:02.636 } 00:10:02.636 ] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.636 20:05:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 BaseBdev3 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.636 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 [ 00:10:02.895 { 
00:10:02.895 "name": "BaseBdev3", 00:10:02.895 "aliases": [ 00:10:02.895 "e775492f-685b-4822-9501-645e8317db93" 00:10:02.895 ], 00:10:02.895 "product_name": "Malloc disk", 00:10:02.895 "block_size": 512, 00:10:02.895 "num_blocks": 65536, 00:10:02.895 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:02.895 "assigned_rate_limits": { 00:10:02.895 "rw_ios_per_sec": 0, 00:10:02.895 "rw_mbytes_per_sec": 0, 00:10:02.895 "r_mbytes_per_sec": 0, 00:10:02.895 "w_mbytes_per_sec": 0 00:10:02.895 }, 00:10:02.895 "claimed": false, 00:10:02.895 "zoned": false, 00:10:02.895 "supported_io_types": { 00:10:02.895 "read": true, 00:10:02.895 "write": true, 00:10:02.895 "unmap": true, 00:10:02.895 "flush": true, 00:10:02.895 "reset": true, 00:10:02.895 "nvme_admin": false, 00:10:02.895 "nvme_io": false, 00:10:02.895 "nvme_io_md": false, 00:10:02.895 "write_zeroes": true, 00:10:02.895 "zcopy": true, 00:10:02.895 "get_zone_info": false, 00:10:02.895 "zone_management": false, 00:10:02.895 "zone_append": false, 00:10:02.895 "compare": false, 00:10:02.895 "compare_and_write": false, 00:10:02.895 "abort": true, 00:10:02.895 "seek_hole": false, 00:10:02.895 "seek_data": false, 00:10:02.895 "copy": true, 00:10:02.895 "nvme_iov_md": false 00:10:02.895 }, 00:10:02.895 "memory_domains": [ 00:10:02.895 { 00:10:02.895 "dma_device_id": "system", 00:10:02.895 "dma_device_type": 1 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.895 "dma_device_type": 2 00:10:02.895 } 00:10:02.895 ], 00:10:02.895 "driver_specific": {} 00:10:02.895 } 00:10:02.895 ] 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 BaseBdev4 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.895 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:02.895 [ 00:10:02.895 { 00:10:02.895 "name": "BaseBdev4", 00:10:02.895 "aliases": [ 00:10:02.895 "a56784fa-aa5c-4ab3-a748-630db4cc9e9c" 00:10:02.896 ], 00:10:02.896 "product_name": "Malloc disk", 00:10:02.896 "block_size": 512, 00:10:02.896 "num_blocks": 65536, 00:10:02.896 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:02.896 "assigned_rate_limits": { 00:10:02.896 "rw_ios_per_sec": 0, 00:10:02.896 "rw_mbytes_per_sec": 0, 00:10:02.896 "r_mbytes_per_sec": 0, 00:10:02.896 "w_mbytes_per_sec": 0 00:10:02.896 }, 00:10:02.896 "claimed": false, 00:10:02.896 "zoned": false, 00:10:02.896 "supported_io_types": { 00:10:02.896 "read": true, 00:10:02.896 "write": true, 00:10:02.896 "unmap": true, 00:10:02.896 "flush": true, 00:10:02.896 "reset": true, 00:10:02.896 "nvme_admin": false, 00:10:02.896 "nvme_io": false, 00:10:02.896 "nvme_io_md": false, 00:10:02.896 "write_zeroes": true, 00:10:02.896 "zcopy": true, 00:10:02.896 "get_zone_info": false, 00:10:02.896 "zone_management": false, 00:10:02.896 "zone_append": false, 00:10:02.896 "compare": false, 00:10:02.896 "compare_and_write": false, 00:10:02.896 "abort": true, 00:10:02.896 "seek_hole": false, 00:10:02.896 "seek_data": false, 00:10:02.896 "copy": true, 00:10:02.896 "nvme_iov_md": false 00:10:02.896 }, 00:10:02.896 "memory_domains": [ 00:10:02.896 { 00:10:02.896 "dma_device_id": "system", 00:10:02.896 "dma_device_type": 1 00:10:02.896 }, 00:10:02.896 { 00:10:02.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.896 "dma_device_type": 2 00:10:02.896 } 00:10:02.896 ], 00:10:02.896 "driver_specific": {} 00:10:02.896 } 00:10:02.896 ] 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.896 20:05:34 
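The trace above shows the `waitforbdev` shell helper confirming that `BaseBdev4` exists: it runs `bdev_wait_for_examine` and then `bdev_get_bdevs -b BaseBdev4 -t 2000`, succeeding once the bdev is reported. As a readability aid, here is a minimal Python sketch of that polling pattern. The `get_bdevs` callable stands in for the RPC query and is an assumption of this sketch, not an SPDK API; in the real helper the 2000 ms timeout is passed to the RPC itself via `-t`.

```python
import time

def waitforbdev(get_bdevs, bdev_name, timeout_s=2.0, poll_interval=0.1):
    """Poll until a bdev with the given name appears, mirroring the
    waitforbdev shell helper from the trace. `get_bdevs` is a stand-in
    for the `bdev_get_bdevs` RPC and returns a list of bdev dicts."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b.get("name") == bdev_name for b in get_bdevs()):
            return True
        time.sleep(poll_interval)
    return False

# Stubbed usage: a fake RPC that already reports the bdev.
print(waitforbdev(lambda: [{"name": "BaseBdev4"}], "BaseBdev4"))
```

With a live target, `get_bdevs` would wrap an actual JSON-RPC call; here the lambda stub keeps the sketch self-contained.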
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.896 [2024-12-08 20:05:34.719573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.896 [2024-12-08 20:05:34.719660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.896 [2024-12-08 20:05:34.719707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.896 [2024-12-08 20:05:34.721537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.896 [2024-12-08 20:05:34.721632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.896 "name": "Existed_Raid", 00:10:02.896 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:02.896 "strip_size_kb": 64, 00:10:02.896 "state": "configuring", 00:10:02.896 "raid_level": "raid0", 00:10:02.896 "superblock": true, 00:10:02.896 "num_base_bdevs": 4, 00:10:02.896 "num_base_bdevs_discovered": 3, 00:10:02.896 "num_base_bdevs_operational": 4, 00:10:02.896 "base_bdevs_list": [ 00:10:02.896 { 00:10:02.896 "name": "BaseBdev1", 00:10:02.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.896 "is_configured": false, 00:10:02.896 "data_offset": 0, 00:10:02.896 "data_size": 0 00:10:02.896 }, 00:10:02.896 { 00:10:02.896 "name": "BaseBdev2", 00:10:02.896 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:02.896 "is_configured": true, 00:10:02.896 "data_offset": 2048, 00:10:02.896 "data_size": 63488 
00:10:02.896 }, 00:10:02.896 { 00:10:02.896 "name": "BaseBdev3", 00:10:02.896 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:02.896 "is_configured": true, 00:10:02.896 "data_offset": 2048, 00:10:02.896 "data_size": 63488 00:10:02.896 }, 00:10:02.896 { 00:10:02.896 "name": "BaseBdev4", 00:10:02.896 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:02.896 "is_configured": true, 00:10:02.896 "data_offset": 2048, 00:10:02.896 "data_size": 63488 00:10:02.896 } 00:10:02.896 ] 00:10:02.896 }' 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.896 20:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.464 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.465 [2024-12-08 20:05:35.166908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.465 "name": "Existed_Raid", 00:10:03.465 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:03.465 "strip_size_kb": 64, 00:10:03.465 "state": "configuring", 00:10:03.465 "raid_level": "raid0", 00:10:03.465 "superblock": true, 00:10:03.465 "num_base_bdevs": 4, 00:10:03.465 "num_base_bdevs_discovered": 2, 00:10:03.465 "num_base_bdevs_operational": 4, 00:10:03.465 "base_bdevs_list": [ 00:10:03.465 { 00:10:03.465 "name": "BaseBdev1", 00:10:03.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.465 "is_configured": false, 00:10:03.465 "data_offset": 0, 00:10:03.465 "data_size": 0 00:10:03.465 }, 00:10:03.465 { 00:10:03.465 "name": null, 00:10:03.465 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:03.465 "is_configured": false, 00:10:03.465 "data_offset": 0, 00:10:03.465 "data_size": 63488 
00:10:03.465 }, 00:10:03.465 { 00:10:03.465 "name": "BaseBdev3", 00:10:03.465 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:03.465 "is_configured": true, 00:10:03.465 "data_offset": 2048, 00:10:03.465 "data_size": 63488 00:10:03.465 }, 00:10:03.465 { 00:10:03.465 "name": "BaseBdev4", 00:10:03.465 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:03.465 "is_configured": true, 00:10:03.465 "data_offset": 2048, 00:10:03.465 "data_size": 63488 00:10:03.465 } 00:10:03.465 ] 00:10:03.465 }' 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.465 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.759 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.759 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.759 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.759 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.759 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.037 [2024-12-08 20:05:35.752747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.037 BaseBdev1 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.037 [ 00:10:04.037 { 00:10:04.037 "name": "BaseBdev1", 00:10:04.037 "aliases": [ 00:10:04.037 "4b51efed-5d97-484a-a781-b9948293beb5" 00:10:04.037 ], 00:10:04.037 "product_name": "Malloc disk", 00:10:04.037 "block_size": 512, 00:10:04.037 "num_blocks": 65536, 00:10:04.037 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:04.037 "assigned_rate_limits": { 00:10:04.037 "rw_ios_per_sec": 0, 00:10:04.037 "rw_mbytes_per_sec": 0, 
00:10:04.037 "r_mbytes_per_sec": 0, 00:10:04.037 "w_mbytes_per_sec": 0 00:10:04.037 }, 00:10:04.037 "claimed": true, 00:10:04.037 "claim_type": "exclusive_write", 00:10:04.037 "zoned": false, 00:10:04.037 "supported_io_types": { 00:10:04.037 "read": true, 00:10:04.037 "write": true, 00:10:04.037 "unmap": true, 00:10:04.037 "flush": true, 00:10:04.037 "reset": true, 00:10:04.037 "nvme_admin": false, 00:10:04.037 "nvme_io": false, 00:10:04.037 "nvme_io_md": false, 00:10:04.037 "write_zeroes": true, 00:10:04.037 "zcopy": true, 00:10:04.037 "get_zone_info": false, 00:10:04.037 "zone_management": false, 00:10:04.037 "zone_append": false, 00:10:04.037 "compare": false, 00:10:04.037 "compare_and_write": false, 00:10:04.037 "abort": true, 00:10:04.037 "seek_hole": false, 00:10:04.037 "seek_data": false, 00:10:04.037 "copy": true, 00:10:04.037 "nvme_iov_md": false 00:10:04.037 }, 00:10:04.037 "memory_domains": [ 00:10:04.037 { 00:10:04.037 "dma_device_id": "system", 00:10:04.037 "dma_device_type": 1 00:10:04.037 }, 00:10:04.037 { 00:10:04.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.037 "dma_device_type": 2 00:10:04.037 } 00:10:04.037 ], 00:10:04.037 "driver_specific": {} 00:10:04.037 } 00:10:04.037 ] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.037 20:05:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.037 "name": "Existed_Raid", 00:10:04.037 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:04.037 "strip_size_kb": 64, 00:10:04.037 "state": "configuring", 00:10:04.037 "raid_level": "raid0", 00:10:04.037 "superblock": true, 00:10:04.037 "num_base_bdevs": 4, 00:10:04.037 "num_base_bdevs_discovered": 3, 00:10:04.037 "num_base_bdevs_operational": 4, 00:10:04.037 "base_bdevs_list": [ 00:10:04.037 { 00:10:04.037 "name": "BaseBdev1", 00:10:04.037 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:04.037 "is_configured": true, 00:10:04.037 "data_offset": 2048, 00:10:04.037 "data_size": 63488 00:10:04.037 }, 00:10:04.037 { 
00:10:04.037 "name": null, 00:10:04.037 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:04.037 "is_configured": false, 00:10:04.037 "data_offset": 0, 00:10:04.037 "data_size": 63488 00:10:04.037 }, 00:10:04.037 { 00:10:04.037 "name": "BaseBdev3", 00:10:04.037 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:04.037 "is_configured": true, 00:10:04.037 "data_offset": 2048, 00:10:04.037 "data_size": 63488 00:10:04.037 }, 00:10:04.037 { 00:10:04.037 "name": "BaseBdev4", 00:10:04.037 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:04.037 "is_configured": true, 00:10:04.037 "data_offset": 2048, 00:10:04.037 "data_size": 63488 00:10:04.037 } 00:10:04.037 ] 00:10:04.037 }' 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.037 20:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.309 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.309 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.309 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.309 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.309 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.569 [2024-12-08 20:05:36.303909] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.569 20:05:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.569 "name": "Existed_Raid", 00:10:04.569 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:04.569 "strip_size_kb": 64, 00:10:04.569 "state": "configuring", 00:10:04.569 "raid_level": "raid0", 00:10:04.569 "superblock": true, 00:10:04.569 "num_base_bdevs": 4, 00:10:04.569 "num_base_bdevs_discovered": 2, 00:10:04.569 "num_base_bdevs_operational": 4, 00:10:04.569 "base_bdevs_list": [ 00:10:04.569 { 00:10:04.569 "name": "BaseBdev1", 00:10:04.569 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:04.569 "is_configured": true, 00:10:04.569 "data_offset": 2048, 00:10:04.569 "data_size": 63488 00:10:04.569 }, 00:10:04.569 { 00:10:04.569 "name": null, 00:10:04.569 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:04.569 "is_configured": false, 00:10:04.569 "data_offset": 0, 00:10:04.569 "data_size": 63488 00:10:04.569 }, 00:10:04.569 { 00:10:04.569 "name": null, 00:10:04.569 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:04.569 "is_configured": false, 00:10:04.569 "data_offset": 0, 00:10:04.569 "data_size": 63488 00:10:04.569 }, 00:10:04.569 { 00:10:04.569 "name": "BaseBdev4", 00:10:04.569 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:04.569 "is_configured": true, 00:10:04.569 "data_offset": 2048, 00:10:04.569 "data_size": 63488 00:10:04.569 } 00:10:04.569 ] 00:10:04.569 }' 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.569 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.828 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.828 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.828 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.828 20:05:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.828 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.088 [2024-12-08 20:05:36.823089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.088 "name": "Existed_Raid", 00:10:05.088 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:05.088 "strip_size_kb": 64, 00:10:05.088 "state": "configuring", 00:10:05.088 "raid_level": "raid0", 00:10:05.088 "superblock": true, 00:10:05.088 "num_base_bdevs": 4, 00:10:05.088 "num_base_bdevs_discovered": 3, 00:10:05.088 "num_base_bdevs_operational": 4, 00:10:05.088 "base_bdevs_list": [ 00:10:05.088 { 00:10:05.088 "name": "BaseBdev1", 00:10:05.088 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:05.088 "is_configured": true, 00:10:05.088 "data_offset": 2048, 00:10:05.088 "data_size": 63488 00:10:05.088 }, 00:10:05.088 { 00:10:05.088 "name": null, 00:10:05.088 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:05.088 "is_configured": false, 00:10:05.088 "data_offset": 0, 00:10:05.088 "data_size": 63488 00:10:05.088 }, 00:10:05.088 { 00:10:05.088 "name": "BaseBdev3", 00:10:05.088 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:05.088 "is_configured": true, 00:10:05.088 "data_offset": 2048, 00:10:05.088 "data_size": 63488 00:10:05.088 }, 00:10:05.088 { 00:10:05.088 "name": "BaseBdev4", 00:10:05.088 "uuid": 
"a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:05.088 "is_configured": true, 00:10:05.088 "data_offset": 2048, 00:10:05.088 "data_size": 63488 00:10:05.088 } 00:10:05.088 ] 00:10:05.088 }' 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.088 20:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.347 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 [2024-12-08 20:05:37.326238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.606 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.607 "name": "Existed_Raid", 00:10:05.607 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:05.607 "strip_size_kb": 64, 00:10:05.607 "state": "configuring", 00:10:05.607 "raid_level": "raid0", 00:10:05.607 "superblock": true, 00:10:05.607 "num_base_bdevs": 4, 00:10:05.607 "num_base_bdevs_discovered": 2, 00:10:05.607 "num_base_bdevs_operational": 4, 00:10:05.607 "base_bdevs_list": [ 00:10:05.607 { 00:10:05.607 "name": null, 00:10:05.607 
"uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:05.607 "is_configured": false, 00:10:05.607 "data_offset": 0, 00:10:05.607 "data_size": 63488 00:10:05.607 }, 00:10:05.607 { 00:10:05.607 "name": null, 00:10:05.607 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:05.607 "is_configured": false, 00:10:05.607 "data_offset": 0, 00:10:05.607 "data_size": 63488 00:10:05.607 }, 00:10:05.607 { 00:10:05.607 "name": "BaseBdev3", 00:10:05.607 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:05.607 "is_configured": true, 00:10:05.607 "data_offset": 2048, 00:10:05.607 "data_size": 63488 00:10:05.607 }, 00:10:05.607 { 00:10:05.607 "name": "BaseBdev4", 00:10:05.607 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:05.607 "is_configured": true, 00:10:05.607 "data_offset": 2048, 00:10:05.607 "data_size": 63488 00:10:05.607 } 00:10:05.607 ] 00:10:05.607 }' 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.607 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.886 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.886 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.886 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.886 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 [2024-12-08 20:05:37.904578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.145 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.146 20:05:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.146 "name": "Existed_Raid", 00:10:06.146 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:06.146 "strip_size_kb": 64, 00:10:06.146 "state": "configuring", 00:10:06.146 "raid_level": "raid0", 00:10:06.146 "superblock": true, 00:10:06.146 "num_base_bdevs": 4, 00:10:06.146 "num_base_bdevs_discovered": 3, 00:10:06.146 "num_base_bdevs_operational": 4, 00:10:06.146 "base_bdevs_list": [ 00:10:06.146 { 00:10:06.146 "name": null, 00:10:06.146 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:06.146 "is_configured": false, 00:10:06.146 "data_offset": 0, 00:10:06.146 "data_size": 63488 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "name": "BaseBdev2", 00:10:06.146 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "name": "BaseBdev3", 00:10:06.146 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "name": "BaseBdev4", 00:10:06.146 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 } 00:10:06.146 ] 00:10:06.146 }' 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.146 20:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.404 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.404 20:05:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.404 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.404 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b51efed-5d97-484a-a781-b9948293beb5 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 [2024-12-08 20:05:38.496644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.662 [2024-12-08 20:05:38.497034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.662 [2024-12-08 20:05:38.497086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:06.662 [2024-12-08 20:05:38.497395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:06.662 [2024-12-08 20:05:38.497601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.662 [2024-12-08 20:05:38.497646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:06.662 NewBaseBdev 00:10:06.662 [2024-12-08 20:05:38.497842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.662 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.662 20:05:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 [ 00:10:06.662 { 00:10:06.662 "name": "NewBaseBdev", 00:10:06.662 "aliases": [ 00:10:06.662 "4b51efed-5d97-484a-a781-b9948293beb5" 00:10:06.662 ], 00:10:06.662 "product_name": "Malloc disk", 00:10:06.662 "block_size": 512, 00:10:06.662 "num_blocks": 65536, 00:10:06.663 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:06.663 "assigned_rate_limits": { 00:10:06.663 "rw_ios_per_sec": 0, 00:10:06.663 "rw_mbytes_per_sec": 0, 00:10:06.663 "r_mbytes_per_sec": 0, 00:10:06.663 "w_mbytes_per_sec": 0 00:10:06.663 }, 00:10:06.663 "claimed": true, 00:10:06.663 "claim_type": "exclusive_write", 00:10:06.663 "zoned": false, 00:10:06.663 "supported_io_types": { 00:10:06.663 "read": true, 00:10:06.663 "write": true, 00:10:06.663 "unmap": true, 00:10:06.663 "flush": true, 00:10:06.663 "reset": true, 00:10:06.663 "nvme_admin": false, 00:10:06.663 "nvme_io": false, 00:10:06.663 "nvme_io_md": false, 00:10:06.663 "write_zeroes": true, 00:10:06.663 "zcopy": true, 00:10:06.663 "get_zone_info": false, 00:10:06.663 "zone_management": false, 00:10:06.663 "zone_append": false, 00:10:06.663 "compare": false, 00:10:06.663 "compare_and_write": false, 00:10:06.663 "abort": true, 00:10:06.663 "seek_hole": false, 00:10:06.663 "seek_data": false, 00:10:06.663 "copy": true, 00:10:06.663 "nvme_iov_md": false 00:10:06.663 }, 00:10:06.663 "memory_domains": [ 00:10:06.663 { 00:10:06.663 "dma_device_id": "system", 00:10:06.663 "dma_device_type": 1 00:10:06.663 }, 00:10:06.663 { 00:10:06.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.663 "dma_device_type": 2 00:10:06.663 } 00:10:06.663 ], 00:10:06.663 "driver_specific": {} 00:10:06.663 } 00:10:06.663 ] 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.663 20:05:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.663 "name": "Existed_Raid", 00:10:06.663 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:06.663 "strip_size_kb": 64, 00:10:06.663 
"state": "online", 00:10:06.663 "raid_level": "raid0", 00:10:06.663 "superblock": true, 00:10:06.663 "num_base_bdevs": 4, 00:10:06.663 "num_base_bdevs_discovered": 4, 00:10:06.663 "num_base_bdevs_operational": 4, 00:10:06.663 "base_bdevs_list": [ 00:10:06.663 { 00:10:06.663 "name": "NewBaseBdev", 00:10:06.663 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:06.663 "is_configured": true, 00:10:06.663 "data_offset": 2048, 00:10:06.663 "data_size": 63488 00:10:06.663 }, 00:10:06.663 { 00:10:06.663 "name": "BaseBdev2", 00:10:06.663 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:06.663 "is_configured": true, 00:10:06.663 "data_offset": 2048, 00:10:06.663 "data_size": 63488 00:10:06.663 }, 00:10:06.663 { 00:10:06.663 "name": "BaseBdev3", 00:10:06.663 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:06.663 "is_configured": true, 00:10:06.663 "data_offset": 2048, 00:10:06.663 "data_size": 63488 00:10:06.663 }, 00:10:06.663 { 00:10:06.663 "name": "BaseBdev4", 00:10:06.663 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:06.663 "is_configured": true, 00:10:06.663 "data_offset": 2048, 00:10:06.663 "data_size": 63488 00:10:06.663 } 00:10:06.663 ] 00:10:06.663 }' 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.663 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.233 
20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.233 20:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.233 [2024-12-08 20:05:39.004240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.233 "name": "Existed_Raid", 00:10:07.233 "aliases": [ 00:10:07.233 "36096302-34cb-4a57-8f77-53628e17df2a" 00:10:07.233 ], 00:10:07.233 "product_name": "Raid Volume", 00:10:07.233 "block_size": 512, 00:10:07.233 "num_blocks": 253952, 00:10:07.233 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:07.233 "assigned_rate_limits": { 00:10:07.233 "rw_ios_per_sec": 0, 00:10:07.233 "rw_mbytes_per_sec": 0, 00:10:07.233 "r_mbytes_per_sec": 0, 00:10:07.233 "w_mbytes_per_sec": 0 00:10:07.233 }, 00:10:07.233 "claimed": false, 00:10:07.233 "zoned": false, 00:10:07.233 "supported_io_types": { 00:10:07.233 "read": true, 00:10:07.233 "write": true, 00:10:07.233 "unmap": true, 00:10:07.233 "flush": true, 00:10:07.233 "reset": true, 00:10:07.233 "nvme_admin": false, 00:10:07.233 "nvme_io": false, 00:10:07.233 "nvme_io_md": false, 00:10:07.233 "write_zeroes": true, 00:10:07.233 "zcopy": false, 00:10:07.233 "get_zone_info": false, 00:10:07.233 "zone_management": false, 00:10:07.233 "zone_append": false, 00:10:07.233 "compare": false, 00:10:07.233 "compare_and_write": false, 00:10:07.233 "abort": 
false, 00:10:07.233 "seek_hole": false, 00:10:07.233 "seek_data": false, 00:10:07.233 "copy": false, 00:10:07.233 "nvme_iov_md": false 00:10:07.233 }, 00:10:07.233 "memory_domains": [ 00:10:07.233 { 00:10:07.233 "dma_device_id": "system", 00:10:07.233 "dma_device_type": 1 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.233 "dma_device_type": 2 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "system", 00:10:07.233 "dma_device_type": 1 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.233 "dma_device_type": 2 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "system", 00:10:07.233 "dma_device_type": 1 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.233 "dma_device_type": 2 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "system", 00:10:07.233 "dma_device_type": 1 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.233 "dma_device_type": 2 00:10:07.233 } 00:10:07.233 ], 00:10:07.233 "driver_specific": { 00:10:07.233 "raid": { 00:10:07.233 "uuid": "36096302-34cb-4a57-8f77-53628e17df2a", 00:10:07.233 "strip_size_kb": 64, 00:10:07.233 "state": "online", 00:10:07.233 "raid_level": "raid0", 00:10:07.233 "superblock": true, 00:10:07.233 "num_base_bdevs": 4, 00:10:07.233 "num_base_bdevs_discovered": 4, 00:10:07.233 "num_base_bdevs_operational": 4, 00:10:07.233 "base_bdevs_list": [ 00:10:07.233 { 00:10:07.233 "name": "NewBaseBdev", 00:10:07.233 "uuid": "4b51efed-5d97-484a-a781-b9948293beb5", 00:10:07.233 "is_configured": true, 00:10:07.233 "data_offset": 2048, 00:10:07.233 "data_size": 63488 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "name": "BaseBdev2", 00:10:07.233 "uuid": "d94d396c-288b-44e7-9db4-f8e42f3a312e", 00:10:07.233 "is_configured": true, 00:10:07.233 "data_offset": 2048, 00:10:07.233 "data_size": 63488 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 
"name": "BaseBdev3", 00:10:07.233 "uuid": "e775492f-685b-4822-9501-645e8317db93", 00:10:07.233 "is_configured": true, 00:10:07.233 "data_offset": 2048, 00:10:07.233 "data_size": 63488 00:10:07.233 }, 00:10:07.233 { 00:10:07.233 "name": "BaseBdev4", 00:10:07.233 "uuid": "a56784fa-aa5c-4ab3-a748-630db4cc9e9c", 00:10:07.233 "is_configured": true, 00:10:07.233 "data_offset": 2048, 00:10:07.233 "data_size": 63488 00:10:07.233 } 00:10:07.233 ] 00:10:07.233 } 00:10:07.233 } 00:10:07.233 }' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.233 BaseBdev2 00:10:07.233 BaseBdev3 00:10:07.233 BaseBdev4' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.233 20:05:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.233 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.493 [2024-12-08 20:05:39.331286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.493 [2024-12-08 20:05:39.331361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.493 [2024-12-08 20:05:39.331455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.493 [2024-12-08 20:05:39.331601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.493 [2024-12-08 20:05:39.331657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69862 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69862 ']' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69862 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69862 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69862' 00:10:07.493 killing process with pid 69862 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69862 00:10:07.493 [2024-12-08 20:05:39.378622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.493 20:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69862 00:10:08.060 [2024-12-08 20:05:39.769737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.994 20:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.994 00:10:08.994 real 0m11.543s 00:10:08.994 user 0m18.358s 00:10:08.994 sys 0m2.007s 00:10:08.994 20:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.994 20:05:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.994 ************************************ 00:10:08.994 END TEST raid_state_function_test_sb 00:10:08.994 ************************************ 00:10:08.994 20:05:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:08.994 20:05:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.994 20:05:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.994 20:05:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.994 ************************************ 00:10:08.994 START TEST raid_superblock_test 00:10:08.994 ************************************ 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70533 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70533 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70533 ']' 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.994 20:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.252 [2024-12-08 20:05:41.049311] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:09.252 [2024-12-08 20:05:41.049425] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70533 ] 00:10:09.252 [2024-12-08 20:05:41.203653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.510 [2024-12-08 20:05:41.318620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.769 [2024-12-08 20:05:41.517919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.769 [2024-12-08 20:05:41.517993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.029 
20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 malloc1 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 [2024-12-08 20:05:41.933380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.029 [2024-12-08 20:05:41.933479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.029 [2024-12-08 20:05:41.933518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.029 [2024-12-08 20:05:41.933548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.029 [2024-12-08 20:05:41.935810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.029 [2024-12-08 20:05:41.935885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.029 pt1 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 malloc2 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 [2024-12-08 20:05:41.993127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.029 [2024-12-08 20:05:41.993223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.029 [2024-12-08 20:05:41.993267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:10.029 [2024-12-08 20:05:41.993296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.029 [2024-12-08 20:05:41.995496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.029 [2024-12-08 20:05:41.995570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.029 
pt2 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.029 20:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.029 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.029 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:10.029 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 malloc3 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 [2024-12-08 20:05:42.064417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.289 [2024-12-08 20:05:42.064469] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.289 [2024-12-08 20:05:42.064491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:10.289 [2024-12-08 20:05:42.064501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.289 [2024-12-08 20:05:42.066584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.289 [2024-12-08 20:05:42.066621] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.289 pt3 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 malloc4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 [2024-12-08 20:05:42.116902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:10.289 [2024-12-08 20:05:42.117014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.289 [2024-12-08 20:05:42.117072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:10.289 [2024-12-08 20:05:42.117108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.289 [2024-12-08 20:05:42.119193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.289 [2024-12-08 20:05:42.119287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:10.289 pt4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 [2024-12-08 20:05:42.128919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.289 [2024-12-08 
20:05:42.130760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.289 [2024-12-08 20:05:42.130903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.289 [2024-12-08 20:05:42.130990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:10.289 [2024-12-08 20:05:42.131270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:10.289 [2024-12-08 20:05:42.131324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.289 [2024-12-08 20:05:42.131661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:10.289 [2024-12-08 20:05:42.131911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:10.289 [2024-12-08 20:05:42.131983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:10.289 [2024-12-08 20:05:42.132214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.289 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.289 "name": "raid_bdev1", 00:10:10.289 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:10.289 "strip_size_kb": 64, 00:10:10.289 "state": "online", 00:10:10.290 "raid_level": "raid0", 00:10:10.290 "superblock": true, 00:10:10.290 "num_base_bdevs": 4, 00:10:10.290 "num_base_bdevs_discovered": 4, 00:10:10.290 "num_base_bdevs_operational": 4, 00:10:10.290 "base_bdevs_list": [ 00:10:10.290 { 00:10:10.290 "name": "pt1", 00:10:10.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.290 "is_configured": true, 00:10:10.290 "data_offset": 2048, 00:10:10.290 "data_size": 63488 00:10:10.290 }, 00:10:10.290 { 00:10:10.290 "name": "pt2", 00:10:10.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.290 "is_configured": true, 00:10:10.290 "data_offset": 2048, 00:10:10.290 "data_size": 63488 00:10:10.290 }, 00:10:10.290 { 00:10:10.290 "name": "pt3", 00:10:10.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.290 "is_configured": true, 00:10:10.290 "data_offset": 2048, 00:10:10.290 
"data_size": 63488 00:10:10.290 }, 00:10:10.290 { 00:10:10.290 "name": "pt4", 00:10:10.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:10.290 "is_configured": true, 00:10:10.290 "data_offset": 2048, 00:10:10.290 "data_size": 63488 00:10:10.290 } 00:10:10.290 ] 00:10:10.290 }' 00:10:10.290 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.290 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.859 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 [2024-12-08 20:05:42.552502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.860 "name": "raid_bdev1", 00:10:10.860 "aliases": [ 00:10:10.860 "111411e7-672d-4ca8-b14a-2ea113255fd9" 
00:10:10.860 ], 00:10:10.860 "product_name": "Raid Volume", 00:10:10.860 "block_size": 512, 00:10:10.860 "num_blocks": 253952, 00:10:10.860 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:10.860 "assigned_rate_limits": { 00:10:10.860 "rw_ios_per_sec": 0, 00:10:10.860 "rw_mbytes_per_sec": 0, 00:10:10.860 "r_mbytes_per_sec": 0, 00:10:10.860 "w_mbytes_per_sec": 0 00:10:10.860 }, 00:10:10.860 "claimed": false, 00:10:10.860 "zoned": false, 00:10:10.860 "supported_io_types": { 00:10:10.860 "read": true, 00:10:10.860 "write": true, 00:10:10.860 "unmap": true, 00:10:10.860 "flush": true, 00:10:10.860 "reset": true, 00:10:10.860 "nvme_admin": false, 00:10:10.860 "nvme_io": false, 00:10:10.860 "nvme_io_md": false, 00:10:10.860 "write_zeroes": true, 00:10:10.860 "zcopy": false, 00:10:10.860 "get_zone_info": false, 00:10:10.860 "zone_management": false, 00:10:10.860 "zone_append": false, 00:10:10.860 "compare": false, 00:10:10.860 "compare_and_write": false, 00:10:10.860 "abort": false, 00:10:10.860 "seek_hole": false, 00:10:10.860 "seek_data": false, 00:10:10.860 "copy": false, 00:10:10.860 "nvme_iov_md": false 00:10:10.860 }, 00:10:10.860 "memory_domains": [ 00:10:10.860 { 00:10:10.860 "dma_device_id": "system", 00:10:10.860 "dma_device_type": 1 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.860 "dma_device_type": 2 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "system", 00:10:10.860 "dma_device_type": 1 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.860 "dma_device_type": 2 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "system", 00:10:10.860 "dma_device_type": 1 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.860 "dma_device_type": 2 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": "system", 00:10:10.860 "dma_device_type": 1 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:10.860 "dma_device_type": 2 00:10:10.860 } 00:10:10.860 ], 00:10:10.860 "driver_specific": { 00:10:10.860 "raid": { 00:10:10.860 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:10.860 "strip_size_kb": 64, 00:10:10.860 "state": "online", 00:10:10.860 "raid_level": "raid0", 00:10:10.860 "superblock": true, 00:10:10.860 "num_base_bdevs": 4, 00:10:10.860 "num_base_bdevs_discovered": 4, 00:10:10.860 "num_base_bdevs_operational": 4, 00:10:10.860 "base_bdevs_list": [ 00:10:10.860 { 00:10:10.860 "name": "pt1", 00:10:10.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.860 "is_configured": true, 00:10:10.860 "data_offset": 2048, 00:10:10.860 "data_size": 63488 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "name": "pt2", 00:10:10.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.860 "is_configured": true, 00:10:10.860 "data_offset": 2048, 00:10:10.860 "data_size": 63488 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "name": "pt3", 00:10:10.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.860 "is_configured": true, 00:10:10.860 "data_offset": 2048, 00:10:10.860 "data_size": 63488 00:10:10.860 }, 00:10:10.860 { 00:10:10.860 "name": "pt4", 00:10:10.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:10.860 "is_configured": true, 00:10:10.860 "data_offset": 2048, 00:10:10.860 "data_size": 63488 00:10:10.860 } 00:10:10.860 ] 00:10:10.860 } 00:10:10.860 } 00:10:10.860 }' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.860 pt2 00:10:10.860 pt3 00:10:10.860 pt4' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.860 20:05:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.860 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 [2024-12-08 20:05:42.883920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=111411e7-672d-4ca8-b14a-2ea113255fd9 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 111411e7-672d-4ca8-b14a-2ea113255fd9 ']' 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 [2024-12-08 20:05:42.931529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.121 [2024-12-08 20:05:42.931597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.121 [2024-12-08 20:05:42.931697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.121 [2024-12-08 20:05:42.931793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.121 [2024-12-08 20:05:42.931859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.121 20:05:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:11.121 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.122 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.381 [2024-12-08 20:05:43.099334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.381 [2024-12-08 20:05:43.101370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.381 [2024-12-08 20:05:43.101420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:11.381 [2024-12-08 20:05:43.101453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:11.381 [2024-12-08 20:05:43.101501] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.381 [2024-12-08 20:05:43.101554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.381 [2024-12-08 20:05:43.101573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:11.381 [2024-12-08 20:05:43.101591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:11.381 [2024-12-08 20:05:43.101605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.381 [2024-12-08 20:05:43.101618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:11.381 request: 00:10:11.381 { 00:10:11.381 "name": "raid_bdev1", 00:10:11.381 "raid_level": "raid0", 00:10:11.381 "base_bdevs": [ 00:10:11.381 "malloc1", 00:10:11.381 "malloc2", 00:10:11.381 "malloc3", 00:10:11.381 "malloc4" 00:10:11.381 ], 00:10:11.381 "strip_size_kb": 64, 00:10:11.381 "superblock": false, 00:10:11.381 "method": "bdev_raid_create", 00:10:11.381 "req_id": 1 00:10:11.381 } 00:10:11.381 Got JSON-RPC error response 00:10:11.381 response: 00:10:11.381 { 00:10:11.381 "code": -17, 00:10:11.381 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.381 } 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.381 [2024-12-08 20:05:43.163245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.381 [2024-12-08 20:05:43.163343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.381 [2024-12-08 20:05:43.163379] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:11.381 [2024-12-08 20:05:43.163417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.381 [2024-12-08 20:05:43.165631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.381 [2024-12-08 20:05:43.165711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.381 [2024-12-08 20:05:43.165830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.381 [2024-12-08 20:05:43.165932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.381 pt1 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.381 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.382 "name": "raid_bdev1", 00:10:11.382 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:11.382 "strip_size_kb": 64, 00:10:11.382 "state": "configuring", 00:10:11.382 "raid_level": "raid0", 00:10:11.382 "superblock": true, 00:10:11.382 "num_base_bdevs": 4, 00:10:11.382 "num_base_bdevs_discovered": 1, 00:10:11.382 "num_base_bdevs_operational": 4, 00:10:11.382 "base_bdevs_list": [ 00:10:11.382 { 00:10:11.382 "name": "pt1", 00:10:11.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.382 "is_configured": true, 00:10:11.382 "data_offset": 2048, 00:10:11.382 "data_size": 63488 00:10:11.382 }, 00:10:11.382 { 00:10:11.382 "name": null, 00:10:11.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.382 "is_configured": false, 00:10:11.382 "data_offset": 2048, 00:10:11.382 "data_size": 63488 00:10:11.382 }, 00:10:11.382 { 00:10:11.382 "name": null, 00:10:11.382 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.382 "is_configured": false, 00:10:11.382 "data_offset": 2048, 00:10:11.382 "data_size": 63488 00:10:11.382 }, 00:10:11.382 { 00:10:11.382 "name": null, 00:10:11.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.382 "is_configured": false, 00:10:11.382 "data_offset": 2048, 00:10:11.382 "data_size": 63488 00:10:11.382 } 00:10:11.382 ] 00:10:11.382 }' 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.382 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.642 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:11.642 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.642 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.642 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.642 [2024-12-08 20:05:43.586610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.642 [2024-12-08 20:05:43.586765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.642 [2024-12-08 20:05:43.586792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:11.642 [2024-12-08 20:05:43.586804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.642 [2024-12-08 20:05:43.587325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.642 [2024-12-08 20:05:43.587357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.643 [2024-12-08 20:05:43.587446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.643 [2024-12-08 20:05:43.587473] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.643 pt2 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.643 [2024-12-08 20:05:43.598604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.643 20:05:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.643 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.903 20:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.903 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.903 "name": "raid_bdev1", 00:10:11.903 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:11.903 "strip_size_kb": 64, 00:10:11.903 "state": "configuring", 00:10:11.903 "raid_level": "raid0", 00:10:11.903 "superblock": true, 00:10:11.903 "num_base_bdevs": 4, 00:10:11.903 "num_base_bdevs_discovered": 1, 00:10:11.903 "num_base_bdevs_operational": 4, 00:10:11.903 "base_bdevs_list": [ 00:10:11.903 { 00:10:11.903 "name": "pt1", 00:10:11.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.903 "is_configured": true, 00:10:11.903 "data_offset": 2048, 00:10:11.903 "data_size": 63488 00:10:11.903 }, 00:10:11.903 { 00:10:11.903 "name": null, 00:10:11.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.903 "is_configured": false, 00:10:11.903 "data_offset": 0, 00:10:11.903 "data_size": 63488 00:10:11.903 }, 00:10:11.903 { 00:10:11.903 "name": null, 00:10:11.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.903 "is_configured": false, 00:10:11.903 "data_offset": 2048, 00:10:11.903 "data_size": 63488 00:10:11.903 }, 00:10:11.903 { 00:10:11.903 "name": null, 00:10:11.903 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.903 "is_configured": false, 00:10:11.903 "data_offset": 2048, 00:10:11.903 "data_size": 63488 00:10:11.903 } 00:10:11.903 ] 00:10:11.903 }' 00:10:11.903 20:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.903 20:05:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.164 [2024-12-08 20:05:44.085774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.164 [2024-12-08 20:05:44.085900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.164 [2024-12-08 20:05:44.085939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:12.164 [2024-12-08 20:05:44.085994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.164 [2024-12-08 20:05:44.086499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.164 [2024-12-08 20:05:44.086566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.164 [2024-12-08 20:05:44.086702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.164 [2024-12-08 20:05:44.086756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.164 pt2 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.164 [2024-12-08 20:05:44.097705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.164 [2024-12-08 20:05:44.097752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.164 [2024-12-08 20:05:44.097769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:12.164 [2024-12-08 20:05:44.097777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.164 [2024-12-08 20:05:44.098135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.164 [2024-12-08 20:05:44.098160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.164 [2024-12-08 20:05:44.098223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.164 [2024-12-08 20:05:44.098247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.164 pt3 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.164 [2024-12-08 20:05:44.109660] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:12.164 [2024-12-08 20:05:44.109702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.164 [2024-12-08 20:05:44.109717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:12.164 [2024-12-08 20:05:44.109725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.164 [2024-12-08 20:05:44.110085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.164 [2024-12-08 20:05:44.110102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:12.164 [2024-12-08 20:05:44.110160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:12.164 [2024-12-08 20:05:44.110180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:12.164 [2024-12-08 20:05:44.110320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.164 [2024-12-08 20:05:44.110334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.164 [2024-12-08 20:05:44.110582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:12.164 [2024-12-08 20:05:44.110734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.164 [2024-12-08 20:05:44.110748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:12.164 [2024-12-08 20:05:44.110915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.164 pt4 00:10:12.164 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.165 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.425 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.425 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.425 "name": "raid_bdev1", 00:10:12.425 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:12.425 "strip_size_kb": 64, 00:10:12.425 "state": "online", 00:10:12.425 "raid_level": "raid0", 00:10:12.425 
"superblock": true, 00:10:12.425 "num_base_bdevs": 4, 00:10:12.425 "num_base_bdevs_discovered": 4, 00:10:12.425 "num_base_bdevs_operational": 4, 00:10:12.425 "base_bdevs_list": [ 00:10:12.425 { 00:10:12.425 "name": "pt1", 00:10:12.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.425 "is_configured": true, 00:10:12.425 "data_offset": 2048, 00:10:12.425 "data_size": 63488 00:10:12.425 }, 00:10:12.425 { 00:10:12.425 "name": "pt2", 00:10:12.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.425 "is_configured": true, 00:10:12.425 "data_offset": 2048, 00:10:12.425 "data_size": 63488 00:10:12.425 }, 00:10:12.425 { 00:10:12.425 "name": "pt3", 00:10:12.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.425 "is_configured": true, 00:10:12.425 "data_offset": 2048, 00:10:12.425 "data_size": 63488 00:10:12.425 }, 00:10:12.425 { 00:10:12.425 "name": "pt4", 00:10:12.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.425 "is_configured": true, 00:10:12.425 "data_offset": 2048, 00:10:12.425 "data_size": 63488 00:10:12.425 } 00:10:12.425 ] 00:10:12.425 }' 00:10:12.425 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.425 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.685 20:05:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.685 [2024-12-08 20:05:44.573305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.685 "name": "raid_bdev1", 00:10:12.685 "aliases": [ 00:10:12.685 "111411e7-672d-4ca8-b14a-2ea113255fd9" 00:10:12.685 ], 00:10:12.685 "product_name": "Raid Volume", 00:10:12.685 "block_size": 512, 00:10:12.685 "num_blocks": 253952, 00:10:12.685 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:12.685 "assigned_rate_limits": { 00:10:12.685 "rw_ios_per_sec": 0, 00:10:12.685 "rw_mbytes_per_sec": 0, 00:10:12.685 "r_mbytes_per_sec": 0, 00:10:12.685 "w_mbytes_per_sec": 0 00:10:12.685 }, 00:10:12.685 "claimed": false, 00:10:12.685 "zoned": false, 00:10:12.685 "supported_io_types": { 00:10:12.685 "read": true, 00:10:12.685 "write": true, 00:10:12.685 "unmap": true, 00:10:12.685 "flush": true, 00:10:12.685 "reset": true, 00:10:12.685 "nvme_admin": false, 00:10:12.685 "nvme_io": false, 00:10:12.685 "nvme_io_md": false, 00:10:12.685 "write_zeroes": true, 00:10:12.685 "zcopy": false, 00:10:12.685 "get_zone_info": false, 00:10:12.685 "zone_management": false, 00:10:12.685 "zone_append": false, 00:10:12.685 "compare": false, 00:10:12.685 "compare_and_write": false, 00:10:12.685 "abort": false, 00:10:12.685 "seek_hole": false, 00:10:12.685 "seek_data": false, 00:10:12.685 "copy": false, 00:10:12.685 "nvme_iov_md": false 00:10:12.685 }, 00:10:12.685 
"memory_domains": [ 00:10:12.685 { 00:10:12.685 "dma_device_id": "system", 00:10:12.685 "dma_device_type": 1 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.685 "dma_device_type": 2 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "system", 00:10:12.685 "dma_device_type": 1 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.685 "dma_device_type": 2 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "system", 00:10:12.685 "dma_device_type": 1 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.685 "dma_device_type": 2 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "system", 00:10:12.685 "dma_device_type": 1 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.685 "dma_device_type": 2 00:10:12.685 } 00:10:12.685 ], 00:10:12.685 "driver_specific": { 00:10:12.685 "raid": { 00:10:12.685 "uuid": "111411e7-672d-4ca8-b14a-2ea113255fd9", 00:10:12.685 "strip_size_kb": 64, 00:10:12.685 "state": "online", 00:10:12.685 "raid_level": "raid0", 00:10:12.685 "superblock": true, 00:10:12.685 "num_base_bdevs": 4, 00:10:12.685 "num_base_bdevs_discovered": 4, 00:10:12.685 "num_base_bdevs_operational": 4, 00:10:12.685 "base_bdevs_list": [ 00:10:12.685 { 00:10:12.685 "name": "pt1", 00:10:12.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.685 "is_configured": true, 00:10:12.685 "data_offset": 2048, 00:10:12.685 "data_size": 63488 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "name": "pt2", 00:10:12.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.685 "is_configured": true, 00:10:12.685 "data_offset": 2048, 00:10:12.685 "data_size": 63488 00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "name": "pt3", 00:10:12.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.685 "is_configured": true, 00:10:12.685 "data_offset": 2048, 00:10:12.685 "data_size": 63488 
00:10:12.685 }, 00:10:12.685 { 00:10:12.685 "name": "pt4", 00:10:12.685 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.685 "is_configured": true, 00:10:12.685 "data_offset": 2048, 00:10:12.685 "data_size": 63488 00:10:12.685 } 00:10:12.685 ] 00:10:12.685 } 00:10:12.685 } 00:10:12.685 }' 00:10:12.685 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.945 pt2 00:10:12.945 pt3 00:10:12.945 pt4' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.945 [2024-12-08 20:05:44.888715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 111411e7-672d-4ca8-b14a-2ea113255fd9 '!=' 111411e7-672d-4ca8-b14a-2ea113255fd9 ']' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70533 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70533 ']' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70533 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.945 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70533 00:10:13.204 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.205 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.205 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70533' 00:10:13.205 killing process with pid 70533 00:10:13.205 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70533 00:10:13.205 [2024-12-08 20:05:44.936865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.205 [2024-12-08 20:05:44.937024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.205 20:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70533 00:10:13.205 [2024-12-08 20:05:44.937140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.205 [2024-12-08 20:05:44.937152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:13.463 [2024-12-08 20:05:45.331360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.846 20:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.846 00:10:14.846 real 0m5.500s 00:10:14.846 user 0m7.880s 00:10:14.846 sys 0m0.894s 00:10:14.846 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.846 20:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 ************************************ 00:10:14.846 END TEST raid_superblock_test 
00:10:14.846 ************************************ 00:10:14.846 20:05:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:14.846 20:05:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.846 20:05:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.846 20:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 ************************************ 00:10:14.846 START TEST raid_read_error_test 00:10:14.846 ************************************ 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VwHRpt8HtW 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70792 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70792 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70792 ']' 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.846 20:05:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 [2024-12-08 20:05:46.635270] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:14.846 [2024-12-08 20:05:46.635529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70792 ] 00:10:14.846 [2024-12-08 20:05:46.811090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.106 [2024-12-08 20:05:46.927583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.367 [2024-12-08 20:05:47.132685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.367 [2024-12-08 20:05:47.132831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 BaseBdev1_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 true 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 [2024-12-08 20:05:47.517186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.627 [2024-12-08 20:05:47.517240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.627 [2024-12-08 20:05:47.517259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.627 [2024-12-08 20:05:47.517269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.627 [2024-12-08 20:05:47.519478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.627 [2024-12-08 20:05:47.519519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.627 BaseBdev1 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 BaseBdev2_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 true 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.627 [2024-12-08 20:05:47.584330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.627 [2024-12-08 20:05:47.584393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.627 [2024-12-08 20:05:47.584425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.627 [2024-12-08 20:05:47.584436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.627 [2024-12-08 20:05:47.586546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.627 [2024-12-08 20:05:47.586628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.627 BaseBdev2 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.627 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 BaseBdev3_malloc 00:10:15.887 20:05:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 true 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 [2024-12-08 20:05:47.664648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.887 [2024-12-08 20:05:47.664702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.887 [2024-12-08 20:05:47.664719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:15.887 [2024-12-08 20:05:47.664729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.887 [2024-12-08 20:05:47.667026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.887 [2024-12-08 20:05:47.667142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:15.887 BaseBdev3 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 BaseBdev4_malloc 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 true 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.887 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.887 [2024-12-08 20:05:47.733213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:15.887 [2024-12-08 20:05:47.733277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.887 [2024-12-08 20:05:47.733310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:15.887 [2024-12-08 20:05:47.733320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.887 [2024-12-08 20:05:47.735531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.887 [2024-12-08 20:05:47.735630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:15.887 BaseBdev4 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.888 [2024-12-08 20:05:47.745249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.888 [2024-12-08 20:05:47.747433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.888 [2024-12-08 20:05:47.747583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.888 [2024-12-08 20:05:47.747673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.888 [2024-12-08 20:05:47.747966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:15.888 [2024-12-08 20:05:47.748023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.888 [2024-12-08 20:05:47.748331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:15.888 [2024-12-08 20:05:47.748556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:15.888 [2024-12-08 20:05:47.748602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:15.888 [2024-12-08 20:05:47.748853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:15.888 20:05:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.888 "name": "raid_bdev1", 00:10:15.888 "uuid": "d9a7e063-7c2a-4ac3-976d-a3903fade4e0", 00:10:15.888 "strip_size_kb": 64, 00:10:15.888 "state": "online", 00:10:15.888 "raid_level": "raid0", 00:10:15.888 "superblock": true, 00:10:15.888 "num_base_bdevs": 4, 00:10:15.888 "num_base_bdevs_discovered": 4, 00:10:15.888 "num_base_bdevs_operational": 4, 00:10:15.888 "base_bdevs_list": [ 00:10:15.888 
{ 00:10:15.888 "name": "BaseBdev1", 00:10:15.888 "uuid": "b9608608-991a-517a-ab25-dfb880c2c6c1", 00:10:15.888 "is_configured": true, 00:10:15.888 "data_offset": 2048, 00:10:15.888 "data_size": 63488 00:10:15.888 }, 00:10:15.888 { 00:10:15.888 "name": "BaseBdev2", 00:10:15.888 "uuid": "626d1b2e-ff79-58e9-a60e-78ad07a43a8b", 00:10:15.888 "is_configured": true, 00:10:15.888 "data_offset": 2048, 00:10:15.888 "data_size": 63488 00:10:15.888 }, 00:10:15.888 { 00:10:15.888 "name": "BaseBdev3", 00:10:15.888 "uuid": "f76f5e08-27b6-538b-b4fa-4b83f6516fd8", 00:10:15.888 "is_configured": true, 00:10:15.888 "data_offset": 2048, 00:10:15.888 "data_size": 63488 00:10:15.888 }, 00:10:15.888 { 00:10:15.888 "name": "BaseBdev4", 00:10:15.888 "uuid": "dd501998-2eba-5741-86f0-e434821f796e", 00:10:15.888 "is_configured": true, 00:10:15.888 "data_offset": 2048, 00:10:15.888 "data_size": 63488 00:10:15.888 } 00:10:15.888 ] 00:10:15.888 }' 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.888 20:05:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.456 20:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.456 20:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.456 [2024-12-08 20:05:48.229591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.393 20:05:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.393 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.393 20:05:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.393 "name": "raid_bdev1", 00:10:17.393 "uuid": "d9a7e063-7c2a-4ac3-976d-a3903fade4e0", 00:10:17.393 "strip_size_kb": 64, 00:10:17.393 "state": "online", 00:10:17.393 "raid_level": "raid0", 00:10:17.393 "superblock": true, 00:10:17.393 "num_base_bdevs": 4, 00:10:17.393 "num_base_bdevs_discovered": 4, 00:10:17.393 "num_base_bdevs_operational": 4, 00:10:17.394 "base_bdevs_list": [ 00:10:17.394 { 00:10:17.394 "name": "BaseBdev1", 00:10:17.394 "uuid": "b9608608-991a-517a-ab25-dfb880c2c6c1", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 }, 00:10:17.394 { 00:10:17.394 "name": "BaseBdev2", 00:10:17.394 "uuid": "626d1b2e-ff79-58e9-a60e-78ad07a43a8b", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 }, 00:10:17.394 { 00:10:17.394 "name": "BaseBdev3", 00:10:17.394 "uuid": "f76f5e08-27b6-538b-b4fa-4b83f6516fd8", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 }, 00:10:17.394 { 00:10:17.394 "name": "BaseBdev4", 00:10:17.394 "uuid": "dd501998-2eba-5741-86f0-e434821f796e", 00:10:17.394 "is_configured": true, 00:10:17.394 "data_offset": 2048, 00:10:17.394 "data_size": 63488 00:10:17.394 } 00:10:17.394 ] 00:10:17.394 }' 00:10:17.394 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.394 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.652 [2024-12-08 20:05:49.537255] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.652 [2024-12-08 20:05:49.537372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.652 [2024-12-08 20:05:49.540369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.652 [2024-12-08 20:05:49.540488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.652 [2024-12-08 20:05:49.540553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.652 [2024-12-08 20:05:49.540635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:17.652 { 00:10:17.652 "results": [ 00:10:17.652 { 00:10:17.652 "job": "raid_bdev1", 00:10:17.652 "core_mask": "0x1", 00:10:17.652 "workload": "randrw", 00:10:17.652 "percentage": 50, 00:10:17.652 "status": "finished", 00:10:17.652 "queue_depth": 1, 00:10:17.652 "io_size": 131072, 00:10:17.652 "runtime": 1.30849, 00:10:17.652 "iops": 14904.202554089065, 00:10:17.652 "mibps": 1863.025319261133, 00:10:17.652 "io_failed": 1, 00:10:17.652 "io_timeout": 0, 00:10:17.652 "avg_latency_us": 93.1331937511797, 00:10:17.652 "min_latency_us": 27.388646288209607, 00:10:17.652 "max_latency_us": 1409.4532751091704 00:10:17.652 } 00:10:17.652 ], 00:10:17.652 "core_count": 1 00:10:17.652 } 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70792 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70792 ']' 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70792 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70792 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70792' 00:10:17.652 killing process with pid 70792 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70792 00:10:17.652 [2024-12-08 20:05:49.574521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.652 20:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70792 00:10:18.220 [2024-12-08 20:05:49.898233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VwHRpt8HtW 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:19.157 ************************************ 00:10:19.157 END TEST raid_read_error_test 00:10:19.157 ************************************ 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:19.157 00:10:19.157 real 0m4.552s 
00:10:19.157 user 0m5.240s 00:10:19.157 sys 0m0.566s 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.157 20:05:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.415 20:05:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:19.415 20:05:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.415 20:05:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.415 20:05:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.415 ************************************ 00:10:19.415 START TEST raid_write_error_test 00:10:19.415 ************************************ 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.415 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tUYk8LZi3X 00:10:19.416 20:05:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70942 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70942 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70942 ']' 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.416 20:05:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.416 [2024-12-08 20:05:51.255863] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:19.416 [2024-12-08 20:05:51.256086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70942 ] 00:10:19.674 [2024-12-08 20:05:51.427848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.674 [2024-12-08 20:05:51.537775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.933 [2024-12-08 20:05:51.743486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.933 [2024-12-08 20:05:51.743542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.192 BaseBdev1_malloc 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.192 true 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.192 [2024-12-08 20:05:52.138588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.192 [2024-12-08 20:05:52.138647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.192 [2024-12-08 20:05:52.138667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.192 [2024-12-08 20:05:52.138678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.192 [2024-12-08 20:05:52.141041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.192 [2024-12-08 20:05:52.141087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.192 BaseBdev1 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.192 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.454 BaseBdev2_malloc 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.454 20:05:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.454 true 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.454 [2024-12-08 20:05:52.206449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.454 [2024-12-08 20:05:52.206506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.454 [2024-12-08 20:05:52.206523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.454 [2024-12-08 20:05:52.206534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.454 [2024-12-08 20:05:52.208916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.454 [2024-12-08 20:05:52.208974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.454 BaseBdev2 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.454 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.455 BaseBdev3_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 true 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 [2024-12-08 20:05:52.294490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.455 [2024-12-08 20:05:52.294594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.455 [2024-12-08 20:05:52.294637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:20.455 [2024-12-08 20:05:52.294672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.455 [2024-12-08 20:05:52.297210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.455 [2024-12-08 20:05:52.297289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:20.455 BaseBdev3 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 BaseBdev4_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 true 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 [2024-12-08 20:05:52.361579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:20.455 [2024-12-08 20:05:52.361675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.455 [2024-12-08 20:05:52.361727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:20.455 [2024-12-08 20:05:52.361758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.455 [2024-12-08 20:05:52.363930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.455 [2024-12-08 20:05:52.364022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:20.455 BaseBdev4 
00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 [2024-12-08 20:05:52.373647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.455 [2024-12-08 20:05:52.375735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.455 [2024-12-08 20:05:52.375869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.455 [2024-12-08 20:05:52.375995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.455 [2024-12-08 20:05:52.376295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:20.455 [2024-12-08 20:05:52.376357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.455 [2024-12-08 20:05:52.376676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:20.455 [2024-12-08 20:05:52.376855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:20.455 [2024-12-08 20:05:52.376868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:20.455 [2024-12-08 20:05:52.377052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.455 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.713 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.713 "name": "raid_bdev1", 00:10:20.713 "uuid": "2bd1a656-e78a-4e06-8133-e42de4a8a137", 00:10:20.713 "strip_size_kb": 64, 00:10:20.713 "state": "online", 00:10:20.713 "raid_level": "raid0", 00:10:20.713 "superblock": true, 00:10:20.713 "num_base_bdevs": 4, 00:10:20.713 "num_base_bdevs_discovered": 4, 00:10:20.713 
"num_base_bdevs_operational": 4, 00:10:20.713 "base_bdevs_list": [ 00:10:20.713 { 00:10:20.713 "name": "BaseBdev1", 00:10:20.713 "uuid": "ab8a6dea-09e2-55b4-9856-e4eb43607059", 00:10:20.713 "is_configured": true, 00:10:20.713 "data_offset": 2048, 00:10:20.713 "data_size": 63488 00:10:20.713 }, 00:10:20.713 { 00:10:20.713 "name": "BaseBdev2", 00:10:20.713 "uuid": "bbdcf34e-091a-5ba3-976b-0d1ef8b15004", 00:10:20.713 "is_configured": true, 00:10:20.713 "data_offset": 2048, 00:10:20.713 "data_size": 63488 00:10:20.713 }, 00:10:20.713 { 00:10:20.713 "name": "BaseBdev3", 00:10:20.713 "uuid": "7a2a95df-4d9e-5dbb-8a8b-1734307e0d36", 00:10:20.713 "is_configured": true, 00:10:20.713 "data_offset": 2048, 00:10:20.713 "data_size": 63488 00:10:20.713 }, 00:10:20.713 { 00:10:20.713 "name": "BaseBdev4", 00:10:20.713 "uuid": "38d0d59a-4414-547a-91f8-f0ddccf9d27e", 00:10:20.713 "is_configured": true, 00:10:20.713 "data_offset": 2048, 00:10:20.713 "data_size": 63488 00:10:20.713 } 00:10:20.713 ] 00:10:20.713 }' 00:10:20.713 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.713 20:05:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.971 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.971 20:05:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.971 [2024-12-08 20:05:52.885824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.910 "name": "raid_bdev1", 00:10:21.910 "uuid": "2bd1a656-e78a-4e06-8133-e42de4a8a137", 00:10:21.910 "strip_size_kb": 64, 00:10:21.910 "state": "online", 00:10:21.910 "raid_level": "raid0", 00:10:21.910 "superblock": true, 00:10:21.910 "num_base_bdevs": 4, 00:10:21.910 "num_base_bdevs_discovered": 4, 00:10:21.910 "num_base_bdevs_operational": 4, 00:10:21.910 "base_bdevs_list": [ 00:10:21.910 { 00:10:21.910 "name": "BaseBdev1", 00:10:21.910 "uuid": "ab8a6dea-09e2-55b4-9856-e4eb43607059", 00:10:21.910 "is_configured": true, 00:10:21.910 "data_offset": 2048, 00:10:21.910 "data_size": 63488 00:10:21.910 }, 00:10:21.910 { 00:10:21.910 "name": "BaseBdev2", 00:10:21.910 "uuid": "bbdcf34e-091a-5ba3-976b-0d1ef8b15004", 00:10:21.910 "is_configured": true, 00:10:21.910 "data_offset": 2048, 00:10:21.910 "data_size": 63488 00:10:21.910 }, 00:10:21.910 { 00:10:21.910 "name": "BaseBdev3", 00:10:21.910 "uuid": "7a2a95df-4d9e-5dbb-8a8b-1734307e0d36", 00:10:21.910 "is_configured": true, 00:10:21.910 "data_offset": 2048, 00:10:21.910 "data_size": 63488 00:10:21.910 }, 00:10:21.910 { 00:10:21.910 "name": "BaseBdev4", 00:10:21.910 "uuid": "38d0d59a-4414-547a-91f8-f0ddccf9d27e", 00:10:21.910 "is_configured": true, 00:10:21.910 "data_offset": 2048, 00:10:21.910 "data_size": 63488 00:10:21.910 } 00:10:21.910 ] 00:10:21.910 }' 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.910 20:05:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:22.478 [2024-12-08 20:05:54.233747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.478 [2024-12-08 20:05:54.233783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.478 [2024-12-08 20:05:54.236584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.478 [2024-12-08 20:05:54.236645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.478 [2024-12-08 20:05:54.236691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.478 [2024-12-08 20:05:54.236702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:22.478 { 00:10:22.478 "results": [ 00:10:22.478 { 00:10:22.478 "job": "raid_bdev1", 00:10:22.478 "core_mask": "0x1", 00:10:22.478 "workload": "randrw", 00:10:22.478 "percentage": 50, 00:10:22.478 "status": "finished", 00:10:22.478 "queue_depth": 1, 00:10:22.478 "io_size": 131072, 00:10:22.478 "runtime": 1.34882, 00:10:22.478 "iops": 15069.468127696802, 00:10:22.478 "mibps": 1883.6835159621003, 00:10:22.478 "io_failed": 1, 00:10:22.478 "io_timeout": 0, 00:10:22.478 "avg_latency_us": 92.17817728179205, 00:10:22.478 "min_latency_us": 27.053275109170304, 00:10:22.478 "max_latency_us": 1473.844541484716 00:10:22.478 } 00:10:22.478 ], 00:10:22.478 "core_count": 1 00:10:22.478 } 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70942 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70942 ']' 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70942 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70942 00:10:22.478 killing process with pid 70942 00:10:22.478 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.479 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.479 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70942' 00:10:22.479 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70942 00:10:22.479 [2024-12-08 20:05:54.278623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.479 20:05:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70942 00:10:22.750 [2024-12-08 20:05:54.610723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tUYk8LZi3X 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:24.124 ************************************ 00:10:24.124 END TEST raid_write_error_test 00:10:24.124 ************************************ 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.74 != \0\.\0\0 ]] 00:10:24.124 00:10:24.124 real 0m4.651s 00:10:24.124 user 0m5.420s 00:10:24.124 sys 0m0.580s 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.124 20:05:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.124 20:05:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:24.124 20:05:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:24.124 20:05:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.124 20:05:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.124 20:05:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.124 ************************************ 00:10:24.124 START TEST raid_state_function_test 00:10:24.124 ************************************ 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71081 00:10:24.124 Process raid pid: 71081 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71081' 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71081 00:10:24.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71081 ']' 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.124 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.125 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.125 20:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.125 [2024-12-08 20:05:55.969579] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:24.125 [2024-12-08 20:05:55.969696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.384 [2024-12-08 20:05:56.145228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.384 [2024-12-08 20:05:56.262148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.643 [2024-12-08 20:05:56.468312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.643 [2024-12-08 20:05:56.468370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.942 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.942 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.942 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.943 [2024-12-08 20:05:56.787251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.943 [2024-12-08 20:05:56.787316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.943 [2024-12-08 20:05:56.787328] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.943 [2024-12-08 20:05:56.787339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.943 [2024-12-08 20:05:56.787345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:24.943 [2024-12-08 20:05:56.787366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.943 [2024-12-08 20:05:56.787372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.943 [2024-12-08 20:05:56.787380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.943 "name": "Existed_Raid", 00:10:24.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.943 "strip_size_kb": 64, 00:10:24.943 "state": "configuring", 00:10:24.943 "raid_level": "concat", 00:10:24.943 "superblock": false, 00:10:24.943 "num_base_bdevs": 4, 00:10:24.943 "num_base_bdevs_discovered": 0, 00:10:24.943 "num_base_bdevs_operational": 4, 00:10:24.943 "base_bdevs_list": [ 00:10:24.943 { 00:10:24.943 "name": "BaseBdev1", 00:10:24.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.943 "is_configured": false, 00:10:24.943 "data_offset": 0, 00:10:24.943 "data_size": 0 00:10:24.943 }, 00:10:24.943 { 00:10:24.943 "name": "BaseBdev2", 00:10:24.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.943 "is_configured": false, 00:10:24.943 "data_offset": 0, 00:10:24.943 "data_size": 0 00:10:24.943 }, 00:10:24.943 { 00:10:24.943 "name": "BaseBdev3", 00:10:24.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.943 "is_configured": false, 00:10:24.943 "data_offset": 0, 00:10:24.943 "data_size": 0 00:10:24.943 }, 00:10:24.943 { 00:10:24.943 "name": "BaseBdev4", 00:10:24.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.943 "is_configured": false, 00:10:24.943 "data_offset": 0, 00:10:24.943 "data_size": 0 00:10:24.943 } 00:10:24.943 ] 00:10:24.943 }' 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.943 20:05:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.510 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:25.510 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 [2024-12-08 20:05:57.198450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.511 [2024-12-08 20:05:57.198488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 [2024-12-08 20:05:57.206440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.511 [2024-12-08 20:05:57.206486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.511 [2024-12-08 20:05:57.206495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.511 [2024-12-08 20:05:57.206505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.511 [2024-12-08 20:05:57.206511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.511 [2024-12-08 20:05:57.206520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.511 [2024-12-08 20:05:57.206527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.511 [2024-12-08 20:05:57.206535] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 [2024-12-08 20:05:57.249892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.511 BaseBdev1 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 [ 00:10:25.511 { 00:10:25.511 "name": "BaseBdev1", 00:10:25.511 "aliases": [ 00:10:25.511 "6459062c-ff81-4353-9af9-102bd8f2f9c9" 00:10:25.511 ], 00:10:25.511 "product_name": "Malloc disk", 00:10:25.511 "block_size": 512, 00:10:25.511 "num_blocks": 65536, 00:10:25.511 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:25.511 "assigned_rate_limits": { 00:10:25.511 "rw_ios_per_sec": 0, 00:10:25.511 "rw_mbytes_per_sec": 0, 00:10:25.511 "r_mbytes_per_sec": 0, 00:10:25.511 "w_mbytes_per_sec": 0 00:10:25.511 }, 00:10:25.511 "claimed": true, 00:10:25.511 "claim_type": "exclusive_write", 00:10:25.511 "zoned": false, 00:10:25.511 "supported_io_types": { 00:10:25.511 "read": true, 00:10:25.511 "write": true, 00:10:25.511 "unmap": true, 00:10:25.511 "flush": true, 00:10:25.511 "reset": true, 00:10:25.511 "nvme_admin": false, 00:10:25.511 "nvme_io": false, 00:10:25.511 "nvme_io_md": false, 00:10:25.511 "write_zeroes": true, 00:10:25.511 "zcopy": true, 00:10:25.511 "get_zone_info": false, 00:10:25.511 "zone_management": false, 00:10:25.511 "zone_append": false, 00:10:25.511 "compare": false, 00:10:25.511 "compare_and_write": false, 00:10:25.511 "abort": true, 00:10:25.511 "seek_hole": false, 00:10:25.511 "seek_data": false, 00:10:25.511 "copy": true, 00:10:25.511 "nvme_iov_md": false 00:10:25.511 }, 00:10:25.511 "memory_domains": [ 00:10:25.511 { 00:10:25.511 "dma_device_id": "system", 00:10:25.511 "dma_device_type": 1 00:10:25.511 }, 00:10:25.511 { 00:10:25.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.511 "dma_device_type": 2 00:10:25.511 } 00:10:25.511 ], 00:10:25.511 "driver_specific": {} 00:10:25.511 } 00:10:25.511 ] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.511 "name": "Existed_Raid", 
00:10:25.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.511 "strip_size_kb": 64, 00:10:25.511 "state": "configuring", 00:10:25.511 "raid_level": "concat", 00:10:25.511 "superblock": false, 00:10:25.511 "num_base_bdevs": 4, 00:10:25.511 "num_base_bdevs_discovered": 1, 00:10:25.511 "num_base_bdevs_operational": 4, 00:10:25.511 "base_bdevs_list": [ 00:10:25.511 { 00:10:25.511 "name": "BaseBdev1", 00:10:25.511 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:25.511 "is_configured": true, 00:10:25.511 "data_offset": 0, 00:10:25.511 "data_size": 65536 00:10:25.511 }, 00:10:25.511 { 00:10:25.511 "name": "BaseBdev2", 00:10:25.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.511 "is_configured": false, 00:10:25.511 "data_offset": 0, 00:10:25.511 "data_size": 0 00:10:25.511 }, 00:10:25.511 { 00:10:25.511 "name": "BaseBdev3", 00:10:25.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.511 "is_configured": false, 00:10:25.511 "data_offset": 0, 00:10:25.511 "data_size": 0 00:10:25.511 }, 00:10:25.511 { 00:10:25.511 "name": "BaseBdev4", 00:10:25.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.511 "is_configured": false, 00:10:25.511 "data_offset": 0, 00:10:25.511 "data_size": 0 00:10:25.511 } 00:10:25.511 ] 00:10:25.511 }' 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.511 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.770 [2024-12-08 20:05:57.717133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.770 [2024-12-08 20:05:57.717237] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.770 [2024-12-08 20:05:57.729193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.770 [2024-12-08 20:05:57.731286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.770 [2024-12-08 20:05:57.731376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.770 [2024-12-08 20:05:57.731429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.770 [2024-12-08 20:05:57.731475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.770 [2024-12-08 20:05:57.731512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.770 [2024-12-08 20:05:57.731555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.770 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.029 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.029 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.029 "name": "Existed_Raid", 00:10:26.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.029 "strip_size_kb": 64, 00:10:26.029 "state": "configuring", 00:10:26.029 "raid_level": "concat", 00:10:26.029 "superblock": false, 00:10:26.029 "num_base_bdevs": 4, 00:10:26.029 
"num_base_bdevs_discovered": 1, 00:10:26.029 "num_base_bdevs_operational": 4, 00:10:26.029 "base_bdevs_list": [ 00:10:26.029 { 00:10:26.029 "name": "BaseBdev1", 00:10:26.029 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:26.029 "is_configured": true, 00:10:26.029 "data_offset": 0, 00:10:26.029 "data_size": 65536 00:10:26.029 }, 00:10:26.029 { 00:10:26.029 "name": "BaseBdev2", 00:10:26.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.029 "is_configured": false, 00:10:26.029 "data_offset": 0, 00:10:26.029 "data_size": 0 00:10:26.029 }, 00:10:26.029 { 00:10:26.029 "name": "BaseBdev3", 00:10:26.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.029 "is_configured": false, 00:10:26.029 "data_offset": 0, 00:10:26.029 "data_size": 0 00:10:26.029 }, 00:10:26.029 { 00:10:26.029 "name": "BaseBdev4", 00:10:26.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.029 "is_configured": false, 00:10:26.029 "data_offset": 0, 00:10:26.029 "data_size": 0 00:10:26.029 } 00:10:26.029 ] 00:10:26.029 }' 00:10:26.029 20:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.029 20:05:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 [2024-12-08 20:05:58.229569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.290 BaseBdev2 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.290 20:05:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.290 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 [ 00:10:26.290 { 00:10:26.290 "name": "BaseBdev2", 00:10:26.290 "aliases": [ 00:10:26.290 "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c" 00:10:26.291 ], 00:10:26.291 "product_name": "Malloc disk", 00:10:26.291 "block_size": 512, 00:10:26.291 "num_blocks": 65536, 00:10:26.291 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:26.291 "assigned_rate_limits": { 00:10:26.291 "rw_ios_per_sec": 0, 00:10:26.291 "rw_mbytes_per_sec": 0, 00:10:26.291 "r_mbytes_per_sec": 0, 00:10:26.291 "w_mbytes_per_sec": 0 00:10:26.291 }, 00:10:26.291 "claimed": true, 00:10:26.291 "claim_type": "exclusive_write", 00:10:26.291 "zoned": false, 00:10:26.291 "supported_io_types": { 
00:10:26.291 "read": true, 00:10:26.291 "write": true, 00:10:26.291 "unmap": true, 00:10:26.291 "flush": true, 00:10:26.291 "reset": true, 00:10:26.291 "nvme_admin": false, 00:10:26.291 "nvme_io": false, 00:10:26.291 "nvme_io_md": false, 00:10:26.291 "write_zeroes": true, 00:10:26.291 "zcopy": true, 00:10:26.291 "get_zone_info": false, 00:10:26.569 "zone_management": false, 00:10:26.569 "zone_append": false, 00:10:26.569 "compare": false, 00:10:26.569 "compare_and_write": false, 00:10:26.569 "abort": true, 00:10:26.569 "seek_hole": false, 00:10:26.569 "seek_data": false, 00:10:26.569 "copy": true, 00:10:26.569 "nvme_iov_md": false 00:10:26.569 }, 00:10:26.569 "memory_domains": [ 00:10:26.569 { 00:10:26.569 "dma_device_id": "system", 00:10:26.569 "dma_device_type": 1 00:10:26.569 }, 00:10:26.569 { 00:10:26.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.569 "dma_device_type": 2 00:10:26.569 } 00:10:26.569 ], 00:10:26.569 "driver_specific": {} 00:10:26.569 } 00:10:26.569 ] 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.569 "name": "Existed_Raid", 00:10:26.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.569 "strip_size_kb": 64, 00:10:26.569 "state": "configuring", 00:10:26.569 "raid_level": "concat", 00:10:26.569 "superblock": false, 00:10:26.569 "num_base_bdevs": 4, 00:10:26.569 "num_base_bdevs_discovered": 2, 00:10:26.569 "num_base_bdevs_operational": 4, 00:10:26.569 "base_bdevs_list": [ 00:10:26.569 { 00:10:26.569 "name": "BaseBdev1", 00:10:26.569 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:26.569 "is_configured": true, 00:10:26.569 "data_offset": 0, 00:10:26.569 "data_size": 65536 00:10:26.569 }, 00:10:26.569 { 00:10:26.569 "name": "BaseBdev2", 00:10:26.569 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:26.569 
"is_configured": true, 00:10:26.569 "data_offset": 0, 00:10:26.569 "data_size": 65536 00:10:26.569 }, 00:10:26.569 { 00:10:26.569 "name": "BaseBdev3", 00:10:26.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.569 "is_configured": false, 00:10:26.569 "data_offset": 0, 00:10:26.569 "data_size": 0 00:10:26.569 }, 00:10:26.569 { 00:10:26.569 "name": "BaseBdev4", 00:10:26.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.569 "is_configured": false, 00:10:26.569 "data_offset": 0, 00:10:26.569 "data_size": 0 00:10:26.569 } 00:10:26.569 ] 00:10:26.569 }' 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.569 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.836 [2024-12-08 20:05:58.772517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.836 BaseBdev3 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.836 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.836 [ 00:10:26.836 { 00:10:26.836 "name": "BaseBdev3", 00:10:26.836 "aliases": [ 00:10:26.836 "ae284a4f-9741-4437-a3a9-a5067a0fdf1f" 00:10:26.836 ], 00:10:26.836 "product_name": "Malloc disk", 00:10:26.836 "block_size": 512, 00:10:26.836 "num_blocks": 65536, 00:10:26.836 "uuid": "ae284a4f-9741-4437-a3a9-a5067a0fdf1f", 00:10:26.836 "assigned_rate_limits": { 00:10:26.836 "rw_ios_per_sec": 0, 00:10:26.836 "rw_mbytes_per_sec": 0, 00:10:26.836 "r_mbytes_per_sec": 0, 00:10:26.836 "w_mbytes_per_sec": 0 00:10:26.836 }, 00:10:26.836 "claimed": true, 00:10:26.836 "claim_type": "exclusive_write", 00:10:26.836 "zoned": false, 00:10:26.836 "supported_io_types": { 00:10:26.836 "read": true, 00:10:26.836 "write": true, 00:10:26.836 "unmap": true, 00:10:26.836 "flush": true, 00:10:26.836 "reset": true, 00:10:26.836 "nvme_admin": false, 00:10:26.836 "nvme_io": false, 00:10:27.096 "nvme_io_md": false, 00:10:27.096 "write_zeroes": true, 00:10:27.096 "zcopy": true, 00:10:27.096 "get_zone_info": false, 00:10:27.096 "zone_management": false, 00:10:27.096 "zone_append": false, 00:10:27.096 "compare": false, 00:10:27.096 "compare_and_write": false, 
00:10:27.096 "abort": true, 00:10:27.096 "seek_hole": false, 00:10:27.096 "seek_data": false, 00:10:27.096 "copy": true, 00:10:27.096 "nvme_iov_md": false 00:10:27.096 }, 00:10:27.096 "memory_domains": [ 00:10:27.096 { 00:10:27.096 "dma_device_id": "system", 00:10:27.096 "dma_device_type": 1 00:10:27.096 }, 00:10:27.096 { 00:10:27.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.096 "dma_device_type": 2 00:10:27.096 } 00:10:27.096 ], 00:10:27.096 "driver_specific": {} 00:10:27.096 } 00:10:27.096 ] 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.096 "name": "Existed_Raid", 00:10:27.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.096 "strip_size_kb": 64, 00:10:27.096 "state": "configuring", 00:10:27.096 "raid_level": "concat", 00:10:27.096 "superblock": false, 00:10:27.096 "num_base_bdevs": 4, 00:10:27.096 "num_base_bdevs_discovered": 3, 00:10:27.096 "num_base_bdevs_operational": 4, 00:10:27.096 "base_bdevs_list": [ 00:10:27.096 { 00:10:27.096 "name": "BaseBdev1", 00:10:27.096 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:27.096 "is_configured": true, 00:10:27.096 "data_offset": 0, 00:10:27.096 "data_size": 65536 00:10:27.096 }, 00:10:27.096 { 00:10:27.096 "name": "BaseBdev2", 00:10:27.096 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:27.096 "is_configured": true, 00:10:27.096 "data_offset": 0, 00:10:27.096 "data_size": 65536 00:10:27.096 }, 00:10:27.096 { 00:10:27.096 "name": "BaseBdev3", 00:10:27.096 "uuid": "ae284a4f-9741-4437-a3a9-a5067a0fdf1f", 00:10:27.096 "is_configured": true, 00:10:27.096 "data_offset": 0, 00:10:27.096 "data_size": 65536 00:10:27.096 }, 00:10:27.096 { 00:10:27.096 "name": "BaseBdev4", 00:10:27.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.096 "is_configured": false, 
00:10:27.096 "data_offset": 0, 00:10:27.096 "data_size": 0 00:10:27.096 } 00:10:27.096 ] 00:10:27.096 }' 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.096 20:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.355 [2024-12-08 20:05:59.302193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.355 [2024-12-08 20:05:59.302321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.355 [2024-12-08 20:05:59.302347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:27.355 [2024-12-08 20:05:59.302705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.355 [2024-12-08 20:05:59.302919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.355 [2024-12-08 20:05:59.302983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.355 [2024-12-08 20:05:59.303366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.355 BaseBdev4 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.355 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.614 [ 00:10:27.614 { 00:10:27.614 "name": "BaseBdev4", 00:10:27.614 "aliases": [ 00:10:27.614 "038112e2-4585-4257-ae44-58427a0b9028" 00:10:27.614 ], 00:10:27.614 "product_name": "Malloc disk", 00:10:27.614 "block_size": 512, 00:10:27.614 "num_blocks": 65536, 00:10:27.614 "uuid": "038112e2-4585-4257-ae44-58427a0b9028", 00:10:27.614 "assigned_rate_limits": { 00:10:27.614 "rw_ios_per_sec": 0, 00:10:27.614 "rw_mbytes_per_sec": 0, 00:10:27.614 "r_mbytes_per_sec": 0, 00:10:27.614 "w_mbytes_per_sec": 0 00:10:27.614 }, 00:10:27.614 "claimed": true, 00:10:27.614 "claim_type": "exclusive_write", 00:10:27.614 "zoned": false, 00:10:27.614 "supported_io_types": { 00:10:27.614 "read": true, 00:10:27.614 "write": true, 00:10:27.614 "unmap": true, 00:10:27.614 "flush": true, 00:10:27.614 "reset": true, 00:10:27.614 
"nvme_admin": false, 00:10:27.614 "nvme_io": false, 00:10:27.614 "nvme_io_md": false, 00:10:27.614 "write_zeroes": true, 00:10:27.614 "zcopy": true, 00:10:27.614 "get_zone_info": false, 00:10:27.614 "zone_management": false, 00:10:27.614 "zone_append": false, 00:10:27.614 "compare": false, 00:10:27.614 "compare_and_write": false, 00:10:27.614 "abort": true, 00:10:27.614 "seek_hole": false, 00:10:27.614 "seek_data": false, 00:10:27.614 "copy": true, 00:10:27.614 "nvme_iov_md": false 00:10:27.614 }, 00:10:27.614 "memory_domains": [ 00:10:27.614 { 00:10:27.614 "dma_device_id": "system", 00:10:27.614 "dma_device_type": 1 00:10:27.614 }, 00:10:27.614 { 00:10:27.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.614 "dma_device_type": 2 00:10:27.614 } 00:10:27.614 ], 00:10:27.614 "driver_specific": {} 00:10:27.614 } 00:10:27.614 ] 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.614 
20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.614 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.615 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.615 "name": "Existed_Raid", 00:10:27.615 "uuid": "7292ad29-c1ab-4acc-9007-615c78df8e2c", 00:10:27.615 "strip_size_kb": 64, 00:10:27.615 "state": "online", 00:10:27.615 "raid_level": "concat", 00:10:27.615 "superblock": false, 00:10:27.615 "num_base_bdevs": 4, 00:10:27.615 "num_base_bdevs_discovered": 4, 00:10:27.615 "num_base_bdevs_operational": 4, 00:10:27.615 "base_bdevs_list": [ 00:10:27.615 { 00:10:27.615 "name": "BaseBdev1", 00:10:27.615 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:27.615 "is_configured": true, 00:10:27.615 "data_offset": 0, 00:10:27.615 "data_size": 65536 00:10:27.615 }, 00:10:27.615 { 00:10:27.615 "name": "BaseBdev2", 00:10:27.615 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:27.615 "is_configured": true, 00:10:27.615 "data_offset": 0, 00:10:27.615 "data_size": 65536 00:10:27.615 }, 00:10:27.615 { 00:10:27.615 "name": "BaseBdev3", 
00:10:27.615 "uuid": "ae284a4f-9741-4437-a3a9-a5067a0fdf1f", 00:10:27.615 "is_configured": true, 00:10:27.615 "data_offset": 0, 00:10:27.615 "data_size": 65536 00:10:27.615 }, 00:10:27.615 { 00:10:27.615 "name": "BaseBdev4", 00:10:27.615 "uuid": "038112e2-4585-4257-ae44-58427a0b9028", 00:10:27.615 "is_configured": true, 00:10:27.615 "data_offset": 0, 00:10:27.615 "data_size": 65536 00:10:27.615 } 00:10:27.615 ] 00:10:27.615 }' 00:10:27.615 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.615 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.875 [2024-12-08 20:05:59.765808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.875 
20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.875 "name": "Existed_Raid", 00:10:27.875 "aliases": [ 00:10:27.875 "7292ad29-c1ab-4acc-9007-615c78df8e2c" 00:10:27.875 ], 00:10:27.875 "product_name": "Raid Volume", 00:10:27.875 "block_size": 512, 00:10:27.875 "num_blocks": 262144, 00:10:27.875 "uuid": "7292ad29-c1ab-4acc-9007-615c78df8e2c", 00:10:27.875 "assigned_rate_limits": { 00:10:27.875 "rw_ios_per_sec": 0, 00:10:27.875 "rw_mbytes_per_sec": 0, 00:10:27.875 "r_mbytes_per_sec": 0, 00:10:27.875 "w_mbytes_per_sec": 0 00:10:27.875 }, 00:10:27.875 "claimed": false, 00:10:27.875 "zoned": false, 00:10:27.875 "supported_io_types": { 00:10:27.875 "read": true, 00:10:27.875 "write": true, 00:10:27.875 "unmap": true, 00:10:27.875 "flush": true, 00:10:27.875 "reset": true, 00:10:27.875 "nvme_admin": false, 00:10:27.875 "nvme_io": false, 00:10:27.875 "nvme_io_md": false, 00:10:27.875 "write_zeroes": true, 00:10:27.875 "zcopy": false, 00:10:27.875 "get_zone_info": false, 00:10:27.875 "zone_management": false, 00:10:27.875 "zone_append": false, 00:10:27.875 "compare": false, 00:10:27.875 "compare_and_write": false, 00:10:27.875 "abort": false, 00:10:27.875 "seek_hole": false, 00:10:27.875 "seek_data": false, 00:10:27.875 "copy": false, 00:10:27.875 "nvme_iov_md": false 00:10:27.875 }, 00:10:27.875 "memory_domains": [ 00:10:27.875 { 00:10:27.875 "dma_device_id": "system", 00:10:27.875 "dma_device_type": 1 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.875 "dma_device_type": 2 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "system", 00:10:27.875 "dma_device_type": 1 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.875 "dma_device_type": 2 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "system", 00:10:27.875 "dma_device_type": 1 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:27.875 "dma_device_type": 2 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "system", 00:10:27.875 "dma_device_type": 1 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.875 "dma_device_type": 2 00:10:27.875 } 00:10:27.875 ], 00:10:27.875 "driver_specific": { 00:10:27.875 "raid": { 00:10:27.875 "uuid": "7292ad29-c1ab-4acc-9007-615c78df8e2c", 00:10:27.875 "strip_size_kb": 64, 00:10:27.875 "state": "online", 00:10:27.875 "raid_level": "concat", 00:10:27.875 "superblock": false, 00:10:27.875 "num_base_bdevs": 4, 00:10:27.875 "num_base_bdevs_discovered": 4, 00:10:27.875 "num_base_bdevs_operational": 4, 00:10:27.875 "base_bdevs_list": [ 00:10:27.875 { 00:10:27.875 "name": "BaseBdev1", 00:10:27.875 "uuid": "6459062c-ff81-4353-9af9-102bd8f2f9c9", 00:10:27.875 "is_configured": true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev2", 00:10:27.875 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:27.875 "is_configured": true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev3", 00:10:27.875 "uuid": "ae284a4f-9741-4437-a3a9-a5067a0fdf1f", 00:10:27.875 "is_configured": true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev4", 00:10:27.875 "uuid": "038112e2-4585-4257-ae44-58427a0b9028", 00:10:27.875 "is_configured": true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 } 00:10:27.875 ] 00:10:27.875 } 00:10:27.875 } 00:10:27.875 }' 00:10:27.875 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.135 BaseBdev2 
00:10:28.135 BaseBdev3 00:10:28.135 BaseBdev4' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.135 20:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.135 20:06:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.135 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.135 20:06:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 [2024-12-08 20:06:00.116919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.395 [2024-12-08 20:06:00.117007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.395 [2024-12-08 20:06:00.117101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.395 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.396 "name": "Existed_Raid", 00:10:28.396 "uuid": "7292ad29-c1ab-4acc-9007-615c78df8e2c", 00:10:28.396 "strip_size_kb": 64, 00:10:28.396 "state": "offline", 00:10:28.396 "raid_level": "concat", 00:10:28.396 "superblock": false, 00:10:28.396 "num_base_bdevs": 4, 00:10:28.396 "num_base_bdevs_discovered": 3, 00:10:28.396 "num_base_bdevs_operational": 3, 00:10:28.396 "base_bdevs_list": [ 00:10:28.396 { 00:10:28.396 "name": null, 00:10:28.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.396 "is_configured": false, 00:10:28.396 "data_offset": 0, 00:10:28.396 "data_size": 65536 00:10:28.396 }, 00:10:28.396 { 00:10:28.396 "name": "BaseBdev2", 00:10:28.396 "uuid": "250d3e4c-cb06-4eaf-9e3d-7ee8e8d89f2c", 00:10:28.396 "is_configured": 
true, 00:10:28.396 "data_offset": 0, 00:10:28.396 "data_size": 65536 00:10:28.396 }, 00:10:28.396 { 00:10:28.396 "name": "BaseBdev3", 00:10:28.396 "uuid": "ae284a4f-9741-4437-a3a9-a5067a0fdf1f", 00:10:28.396 "is_configured": true, 00:10:28.396 "data_offset": 0, 00:10:28.396 "data_size": 65536 00:10:28.396 }, 00:10:28.396 { 00:10:28.396 "name": "BaseBdev4", 00:10:28.396 "uuid": "038112e2-4585-4257-ae44-58427a0b9028", 00:10:28.396 "is_configured": true, 00:10:28.396 "data_offset": 0, 00:10:28.396 "data_size": 65536 00:10:28.396 } 00:10:28.396 ] 00:10:28.396 }' 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.396 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.964 [2024-12-08 20:06:00.728513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.964 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.964 [2024-12-08 20:06:00.875868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.223 20:06:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 20:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 [2024-12-08 20:06:01.029622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.223 [2024-12-08 20:06:01.029713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.223 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.224 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.483 BaseBdev2 00:10:29.483 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.483 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.483 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 [ 00:10:29.484 { 00:10:29.484 "name": "BaseBdev2", 00:10:29.484 "aliases": [ 00:10:29.484 "3391fcf5-2e5f-406c-b3e6-43018f60b454" 00:10:29.484 ], 00:10:29.484 "product_name": "Malloc disk", 00:10:29.484 "block_size": 512, 00:10:29.484 "num_blocks": 65536, 00:10:29.484 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:29.484 "assigned_rate_limits": { 00:10:29.484 "rw_ios_per_sec": 0, 00:10:29.484 "rw_mbytes_per_sec": 0, 00:10:29.484 "r_mbytes_per_sec": 0, 00:10:29.484 "w_mbytes_per_sec": 0 00:10:29.484 }, 00:10:29.484 "claimed": false, 00:10:29.484 "zoned": false, 00:10:29.484 "supported_io_types": { 00:10:29.484 "read": true, 00:10:29.484 "write": true, 00:10:29.484 "unmap": true, 00:10:29.484 "flush": true, 00:10:29.484 "reset": true, 00:10:29.484 "nvme_admin": false, 00:10:29.484 "nvme_io": false, 00:10:29.484 "nvme_io_md": false, 00:10:29.484 "write_zeroes": true, 00:10:29.484 "zcopy": true, 00:10:29.484 "get_zone_info": false, 00:10:29.484 "zone_management": false, 00:10:29.484 "zone_append": false, 00:10:29.484 "compare": false, 00:10:29.484 "compare_and_write": false, 00:10:29.484 "abort": true, 00:10:29.484 "seek_hole": false, 00:10:29.484 
"seek_data": false, 00:10:29.484 "copy": true, 00:10:29.484 "nvme_iov_md": false 00:10:29.484 }, 00:10:29.484 "memory_domains": [ 00:10:29.484 { 00:10:29.484 "dma_device_id": "system", 00:10:29.484 "dma_device_type": 1 00:10:29.484 }, 00:10:29.484 { 00:10:29.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.484 "dma_device_type": 2 00:10:29.484 } 00:10:29.484 ], 00:10:29.484 "driver_specific": {} 00:10:29.484 } 00:10:29.484 ] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 BaseBdev3 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 [ 00:10:29.484 { 00:10:29.484 "name": "BaseBdev3", 00:10:29.484 "aliases": [ 00:10:29.484 "3001e044-e696-4987-994d-9661d0febf24" 00:10:29.484 ], 00:10:29.484 "product_name": "Malloc disk", 00:10:29.484 "block_size": 512, 00:10:29.484 "num_blocks": 65536, 00:10:29.484 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:29.484 "assigned_rate_limits": { 00:10:29.484 "rw_ios_per_sec": 0, 00:10:29.484 "rw_mbytes_per_sec": 0, 00:10:29.484 "r_mbytes_per_sec": 0, 00:10:29.484 "w_mbytes_per_sec": 0 00:10:29.484 }, 00:10:29.484 "claimed": false, 00:10:29.484 "zoned": false, 00:10:29.484 "supported_io_types": { 00:10:29.484 "read": true, 00:10:29.484 "write": true, 00:10:29.484 "unmap": true, 00:10:29.484 "flush": true, 00:10:29.484 "reset": true, 00:10:29.484 "nvme_admin": false, 00:10:29.484 "nvme_io": false, 00:10:29.484 "nvme_io_md": false, 00:10:29.484 "write_zeroes": true, 00:10:29.484 "zcopy": true, 00:10:29.484 "get_zone_info": false, 00:10:29.484 "zone_management": false, 00:10:29.484 "zone_append": false, 00:10:29.484 "compare": false, 00:10:29.484 "compare_and_write": false, 00:10:29.484 "abort": true, 00:10:29.484 "seek_hole": false, 00:10:29.484 "seek_data": false, 
00:10:29.484 "copy": true, 00:10:29.484 "nvme_iov_md": false 00:10:29.484 }, 00:10:29.484 "memory_domains": [ 00:10:29.484 { 00:10:29.484 "dma_device_id": "system", 00:10:29.484 "dma_device_type": 1 00:10:29.484 }, 00:10:29.484 { 00:10:29.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.484 "dma_device_type": 2 00:10:29.484 } 00:10:29.484 ], 00:10:29.484 "driver_specific": {} 00:10:29.484 } 00:10:29.484 ] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 BaseBdev4 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.484 
20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.484 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.484 [ 00:10:29.484 { 00:10:29.484 "name": "BaseBdev4", 00:10:29.484 "aliases": [ 00:10:29.484 "364e3f8f-1287-43c2-9d98-b1a4782c0809" 00:10:29.484 ], 00:10:29.484 "product_name": "Malloc disk", 00:10:29.484 "block_size": 512, 00:10:29.484 "num_blocks": 65536, 00:10:29.484 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:29.484 "assigned_rate_limits": { 00:10:29.484 "rw_ios_per_sec": 0, 00:10:29.484 "rw_mbytes_per_sec": 0, 00:10:29.484 "r_mbytes_per_sec": 0, 00:10:29.484 "w_mbytes_per_sec": 0 00:10:29.484 }, 00:10:29.484 "claimed": false, 00:10:29.484 "zoned": false, 00:10:29.484 "supported_io_types": { 00:10:29.484 "read": true, 00:10:29.484 "write": true, 00:10:29.484 "unmap": true, 00:10:29.484 "flush": true, 00:10:29.484 "reset": true, 00:10:29.484 "nvme_admin": false, 00:10:29.484 "nvme_io": false, 00:10:29.484 "nvme_io_md": false, 00:10:29.484 "write_zeroes": true, 00:10:29.484 "zcopy": true, 00:10:29.484 "get_zone_info": false, 00:10:29.484 "zone_management": false, 00:10:29.484 "zone_append": false, 00:10:29.484 "compare": false, 00:10:29.484 "compare_and_write": false, 00:10:29.484 "abort": true, 00:10:29.484 "seek_hole": false, 00:10:29.484 "seek_data": false, 00:10:29.485 
"copy": true, 00:10:29.485 "nvme_iov_md": false 00:10:29.485 }, 00:10:29.485 "memory_domains": [ 00:10:29.485 { 00:10:29.485 "dma_device_id": "system", 00:10:29.485 "dma_device_type": 1 00:10:29.485 }, 00:10:29.485 { 00:10:29.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.485 "dma_device_type": 2 00:10:29.485 } 00:10:29.485 ], 00:10:29.485 "driver_specific": {} 00:10:29.485 } 00:10:29.485 ] 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.485 [2024-12-08 20:06:01.430722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.485 [2024-12-08 20:06:01.430805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.485 [2024-12-08 20:06:01.430862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.485 [2024-12-08 20:06:01.432662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.485 [2024-12-08 20:06:01.432759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.485 20:06:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.485 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.745 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.745 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.745 "name": "Existed_Raid", 00:10:29.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.745 "strip_size_kb": 64, 00:10:29.745 "state": "configuring", 00:10:29.745 
"raid_level": "concat", 00:10:29.745 "superblock": false, 00:10:29.745 "num_base_bdevs": 4, 00:10:29.745 "num_base_bdevs_discovered": 3, 00:10:29.745 "num_base_bdevs_operational": 4, 00:10:29.745 "base_bdevs_list": [ 00:10:29.745 { 00:10:29.745 "name": "BaseBdev1", 00:10:29.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.745 "is_configured": false, 00:10:29.745 "data_offset": 0, 00:10:29.745 "data_size": 0 00:10:29.745 }, 00:10:29.745 { 00:10:29.745 "name": "BaseBdev2", 00:10:29.745 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:29.745 "is_configured": true, 00:10:29.745 "data_offset": 0, 00:10:29.745 "data_size": 65536 00:10:29.745 }, 00:10:29.745 { 00:10:29.745 "name": "BaseBdev3", 00:10:29.745 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:29.745 "is_configured": true, 00:10:29.745 "data_offset": 0, 00:10:29.745 "data_size": 65536 00:10:29.745 }, 00:10:29.745 { 00:10:29.745 "name": "BaseBdev4", 00:10:29.745 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:29.745 "is_configured": true, 00:10:29.745 "data_offset": 0, 00:10:29.745 "data_size": 65536 00:10:29.745 } 00:10:29.745 ] 00:10:29.745 }' 00:10:29.745 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.745 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.005 [2024-12-08 20:06:01.862010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.005 "name": "Existed_Raid", 00:10:30.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.005 "strip_size_kb": 64, 00:10:30.005 "state": "configuring", 00:10:30.005 "raid_level": "concat", 00:10:30.005 "superblock": false, 
00:10:30.005 "num_base_bdevs": 4, 00:10:30.005 "num_base_bdevs_discovered": 2, 00:10:30.005 "num_base_bdevs_operational": 4, 00:10:30.005 "base_bdevs_list": [ 00:10:30.005 { 00:10:30.005 "name": "BaseBdev1", 00:10:30.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.005 "is_configured": false, 00:10:30.005 "data_offset": 0, 00:10:30.005 "data_size": 0 00:10:30.005 }, 00:10:30.005 { 00:10:30.005 "name": null, 00:10:30.005 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:30.005 "is_configured": false, 00:10:30.005 "data_offset": 0, 00:10:30.005 "data_size": 65536 00:10:30.005 }, 00:10:30.005 { 00:10:30.005 "name": "BaseBdev3", 00:10:30.005 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:30.005 "is_configured": true, 00:10:30.005 "data_offset": 0, 00:10:30.005 "data_size": 65536 00:10:30.005 }, 00:10:30.005 { 00:10:30.005 "name": "BaseBdev4", 00:10:30.005 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:30.005 "is_configured": true, 00:10:30.005 "data_offset": 0, 00:10:30.005 "data_size": 65536 00:10:30.005 } 00:10:30.005 ] 00:10:30.005 }' 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.005 20:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.573 20:06:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.573 [2024-12-08 20:06:02.408272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.573 BaseBdev1 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.573 20:06:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.573 [ 00:10:30.573 { 00:10:30.573 "name": "BaseBdev1", 00:10:30.573 "aliases": [ 00:10:30.573 "99472f09-9b01-49c2-b79d-f3b1ab9d589f" 00:10:30.573 ], 00:10:30.573 "product_name": "Malloc disk", 00:10:30.573 "block_size": 512, 00:10:30.573 "num_blocks": 65536, 00:10:30.573 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:30.573 "assigned_rate_limits": { 00:10:30.573 "rw_ios_per_sec": 0, 00:10:30.573 "rw_mbytes_per_sec": 0, 00:10:30.573 "r_mbytes_per_sec": 0, 00:10:30.573 "w_mbytes_per_sec": 0 00:10:30.573 }, 00:10:30.573 "claimed": true, 00:10:30.574 "claim_type": "exclusive_write", 00:10:30.574 "zoned": false, 00:10:30.574 "supported_io_types": { 00:10:30.574 "read": true, 00:10:30.574 "write": true, 00:10:30.574 "unmap": true, 00:10:30.574 "flush": true, 00:10:30.574 "reset": true, 00:10:30.574 "nvme_admin": false, 00:10:30.574 "nvme_io": false, 00:10:30.574 "nvme_io_md": false, 00:10:30.574 "write_zeroes": true, 00:10:30.574 "zcopy": true, 00:10:30.574 "get_zone_info": false, 00:10:30.574 "zone_management": false, 00:10:30.574 "zone_append": false, 00:10:30.574 "compare": false, 00:10:30.574 "compare_and_write": false, 00:10:30.574 "abort": true, 00:10:30.574 "seek_hole": false, 00:10:30.574 "seek_data": false, 00:10:30.574 "copy": true, 00:10:30.574 "nvme_iov_md": false 00:10:30.574 }, 00:10:30.574 "memory_domains": [ 00:10:30.574 { 00:10:30.574 "dma_device_id": "system", 00:10:30.574 "dma_device_type": 1 00:10:30.574 }, 00:10:30.574 { 00:10:30.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.574 "dma_device_type": 2 00:10:30.574 } 00:10:30.574 ], 00:10:30.574 "driver_specific": {} 00:10:30.574 } 00:10:30.574 ] 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.574 "name": "Existed_Raid", 00:10:30.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.574 "strip_size_kb": 64, 00:10:30.574 "state": "configuring", 00:10:30.574 "raid_level": "concat", 00:10:30.574 "superblock": false, 
00:10:30.574 "num_base_bdevs": 4, 00:10:30.574 "num_base_bdevs_discovered": 3, 00:10:30.574 "num_base_bdevs_operational": 4, 00:10:30.574 "base_bdevs_list": [ 00:10:30.574 { 00:10:30.574 "name": "BaseBdev1", 00:10:30.574 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:30.574 "is_configured": true, 00:10:30.574 "data_offset": 0, 00:10:30.574 "data_size": 65536 00:10:30.574 }, 00:10:30.574 { 00:10:30.574 "name": null, 00:10:30.574 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:30.574 "is_configured": false, 00:10:30.574 "data_offset": 0, 00:10:30.574 "data_size": 65536 00:10:30.574 }, 00:10:30.574 { 00:10:30.574 "name": "BaseBdev3", 00:10:30.574 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:30.574 "is_configured": true, 00:10:30.574 "data_offset": 0, 00:10:30.574 "data_size": 65536 00:10:30.574 }, 00:10:30.574 { 00:10:30.574 "name": "BaseBdev4", 00:10:30.574 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:30.574 "is_configured": true, 00:10:30.574 "data_offset": 0, 00:10:30.574 "data_size": 65536 00:10:30.574 } 00:10:30.574 ] 00:10:30.574 }' 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.574 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.143 20:06:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 [2024-12-08 20:06:02.899484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.143 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.143 20:06:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.144 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.144 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.144 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.144 "name": "Existed_Raid", 00:10:31.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.144 "strip_size_kb": 64, 00:10:31.144 "state": "configuring", 00:10:31.144 "raid_level": "concat", 00:10:31.144 "superblock": false, 00:10:31.144 "num_base_bdevs": 4, 00:10:31.144 "num_base_bdevs_discovered": 2, 00:10:31.144 "num_base_bdevs_operational": 4, 00:10:31.144 "base_bdevs_list": [ 00:10:31.144 { 00:10:31.144 "name": "BaseBdev1", 00:10:31.144 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:31.144 "is_configured": true, 00:10:31.144 "data_offset": 0, 00:10:31.144 "data_size": 65536 00:10:31.144 }, 00:10:31.144 { 00:10:31.144 "name": null, 00:10:31.144 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:31.144 "is_configured": false, 00:10:31.144 "data_offset": 0, 00:10:31.144 "data_size": 65536 00:10:31.144 }, 00:10:31.144 { 00:10:31.144 "name": null, 00:10:31.144 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:31.144 "is_configured": false, 00:10:31.144 "data_offset": 0, 00:10:31.144 "data_size": 65536 00:10:31.144 }, 00:10:31.144 { 00:10:31.144 "name": "BaseBdev4", 00:10:31.144 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:31.144 "is_configured": true, 00:10:31.144 "data_offset": 0, 00:10:31.144 "data_size": 65536 00:10:31.144 } 00:10:31.144 ] 00:10:31.144 }' 00:10:31.144 20:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.144 20:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.403 20:06:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.403 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.403 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.403 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.403 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.663 [2024-12-08 20:06:03.403286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.663 "name": "Existed_Raid", 00:10:31.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.663 "strip_size_kb": 64, 00:10:31.663 "state": "configuring", 00:10:31.663 "raid_level": "concat", 00:10:31.663 "superblock": false, 00:10:31.663 "num_base_bdevs": 4, 00:10:31.663 "num_base_bdevs_discovered": 3, 00:10:31.663 "num_base_bdevs_operational": 4, 00:10:31.663 "base_bdevs_list": [ 00:10:31.663 { 00:10:31.663 "name": "BaseBdev1", 00:10:31.663 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:31.663 "is_configured": true, 00:10:31.663 "data_offset": 0, 00:10:31.663 "data_size": 65536 00:10:31.663 }, 00:10:31.663 { 00:10:31.663 "name": null, 00:10:31.663 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:31.663 "is_configured": false, 00:10:31.663 "data_offset": 0, 00:10:31.663 "data_size": 65536 00:10:31.663 }, 00:10:31.663 { 00:10:31.663 "name": "BaseBdev3", 00:10:31.663 "uuid": 
"3001e044-e696-4987-994d-9661d0febf24", 00:10:31.663 "is_configured": true, 00:10:31.663 "data_offset": 0, 00:10:31.663 "data_size": 65536 00:10:31.663 }, 00:10:31.663 { 00:10:31.663 "name": "BaseBdev4", 00:10:31.663 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:31.663 "is_configured": true, 00:10:31.663 "data_offset": 0, 00:10:31.663 "data_size": 65536 00:10:31.663 } 00:10:31.663 ] 00:10:31.663 }' 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.663 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.923 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.923 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.923 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.923 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.183 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.183 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.183 20:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.183 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.183 20:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 [2024-12-08 20:06:03.950413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
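The `verify_raid_bdev_state` calls traced above repeatedly pipe `rpc_cmd bdev_raid_get_bdevs all` through a jq filter (`bdev_raid.sh@113`) to pull out the `Existed_Raid` entry, then inspect its fields. A minimal sketch of that filtering step, run here against an inline copy of the JSON shape from the log rather than a live RPC (the `sample` variable and the trimmed field set are illustrative, not SPDK output):

```shell
#!/usr/bin/env bash
# Trimmed-down stand-in for the bdev_raid_get_bdevs output logged above.
sample='[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]'

# Same filter shape as bdev_raid.sh@113: select the raid bdev by name.
info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$sample")

# Count configured base bdevs, which is what the logged
# num_base_bdevs_discovered field reflects.
discovered=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$info")

echo "state=$(jq -r '.state' <<< "$info") discovered=$discovered"
# → state=configuring discovered=3
```

The raid bdev stays in the `configuring` state for as long as `discovered` is below `num_base_bdevs`, which matches the `"num_base_bdevs_discovered": 3` / `"num_base_bdevs_operational": 4` pairs seen in the trace.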
00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.183 "name": "Existed_Raid", 00:10:32.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.183 "strip_size_kb": 64, 00:10:32.183 "state": "configuring", 00:10:32.183 "raid_level": "concat", 00:10:32.183 "superblock": false, 00:10:32.183 "num_base_bdevs": 4, 00:10:32.183 
"num_base_bdevs_discovered": 2, 00:10:32.183 "num_base_bdevs_operational": 4, 00:10:32.183 "base_bdevs_list": [ 00:10:32.183 { 00:10:32.183 "name": null, 00:10:32.183 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:32.183 "is_configured": false, 00:10:32.183 "data_offset": 0, 00:10:32.183 "data_size": 65536 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": null, 00:10:32.183 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:32.183 "is_configured": false, 00:10:32.183 "data_offset": 0, 00:10:32.183 "data_size": 65536 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": "BaseBdev3", 00:10:32.183 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 0, 00:10:32.183 "data_size": 65536 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": "BaseBdev4", 00:10:32.183 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 0, 00:10:32.183 "data_size": 65536 00:10:32.183 } 00:10:32.183 ] 00:10:32.183 }' 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.183 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.753 [2024-12-08 20:06:04.542279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.753 "name": "Existed_Raid", 00:10:32.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.753 "strip_size_kb": 64, 00:10:32.753 "state": "configuring", 00:10:32.753 "raid_level": "concat", 00:10:32.753 "superblock": false, 00:10:32.753 "num_base_bdevs": 4, 00:10:32.753 "num_base_bdevs_discovered": 3, 00:10:32.753 "num_base_bdevs_operational": 4, 00:10:32.753 "base_bdevs_list": [ 00:10:32.753 { 00:10:32.753 "name": null, 00:10:32.753 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:32.753 "is_configured": false, 00:10:32.753 "data_offset": 0, 00:10:32.753 "data_size": 65536 00:10:32.753 }, 00:10:32.753 { 00:10:32.753 "name": "BaseBdev2", 00:10:32.753 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:32.753 "is_configured": true, 00:10:32.753 "data_offset": 0, 00:10:32.753 "data_size": 65536 00:10:32.753 }, 00:10:32.753 { 00:10:32.753 "name": "BaseBdev3", 00:10:32.753 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:32.753 "is_configured": true, 00:10:32.753 "data_offset": 0, 00:10:32.753 "data_size": 65536 00:10:32.753 }, 00:10:32.753 { 00:10:32.753 "name": "BaseBdev4", 00:10:32.753 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:32.753 "is_configured": true, 00:10:32.753 "data_offset": 0, 00:10:32.753 "data_size": 65536 00:10:32.753 } 00:10:32.753 ] 00:10:32.753 }' 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.753 20:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 99472f09-9b01-49c2-b79d-f3b1ab9d589f 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 [2024-12-08 20:06:05.153090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.322 [2024-12-08 20:06:05.153136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.322 [2024-12-08 20:06:05.153143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:33.322 [2024-12-08 20:06:05.153387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:10:33.322 [2024-12-08 20:06:05.153525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.322 [2024-12-08 20:06:05.153535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.322 [2024-12-08 20:06:05.153774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.322 NewBaseBdev 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.322 20:06:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.322 [ 00:10:33.322 { 00:10:33.322 "name": "NewBaseBdev", 00:10:33.322 "aliases": [ 00:10:33.322 "99472f09-9b01-49c2-b79d-f3b1ab9d589f" 00:10:33.322 ], 00:10:33.322 "product_name": "Malloc disk", 00:10:33.322 "block_size": 512, 00:10:33.322 "num_blocks": 65536, 00:10:33.322 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:33.322 "assigned_rate_limits": { 00:10:33.322 "rw_ios_per_sec": 0, 00:10:33.322 "rw_mbytes_per_sec": 0, 00:10:33.322 "r_mbytes_per_sec": 0, 00:10:33.322 "w_mbytes_per_sec": 0 00:10:33.322 }, 00:10:33.322 "claimed": true, 00:10:33.323 "claim_type": "exclusive_write", 00:10:33.323 "zoned": false, 00:10:33.323 "supported_io_types": { 00:10:33.323 "read": true, 00:10:33.323 "write": true, 00:10:33.323 "unmap": true, 00:10:33.323 "flush": true, 00:10:33.323 "reset": true, 00:10:33.323 "nvme_admin": false, 00:10:33.323 "nvme_io": false, 00:10:33.323 "nvme_io_md": false, 00:10:33.323 "write_zeroes": true, 00:10:33.323 "zcopy": true, 00:10:33.323 "get_zone_info": false, 00:10:33.323 "zone_management": false, 00:10:33.323 "zone_append": false, 00:10:33.323 "compare": false, 00:10:33.323 "compare_and_write": false, 00:10:33.323 "abort": true, 00:10:33.323 "seek_hole": false, 00:10:33.323 "seek_data": false, 00:10:33.323 "copy": true, 00:10:33.323 "nvme_iov_md": false 00:10:33.323 }, 00:10:33.323 "memory_domains": [ 00:10:33.323 { 00:10:33.323 "dma_device_id": "system", 00:10:33.323 "dma_device_type": 1 00:10:33.323 }, 00:10:33.323 { 00:10:33.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.323 "dma_device_type": 2 00:10:33.323 } 00:10:33.323 ], 00:10:33.323 "driver_specific": {} 00:10:33.323 } 00:10:33.323 ] 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.323 "name": "Existed_Raid", 00:10:33.323 "uuid": "e2f53743-873b-4c8f-b5bc-6d586e19373a", 00:10:33.323 "strip_size_kb": 64, 00:10:33.323 "state": "online", 00:10:33.323 "raid_level": "concat", 00:10:33.323 "superblock": false, 00:10:33.323 
"num_base_bdevs": 4, 00:10:33.323 "num_base_bdevs_discovered": 4, 00:10:33.323 "num_base_bdevs_operational": 4, 00:10:33.323 "base_bdevs_list": [ 00:10:33.323 { 00:10:33.323 "name": "NewBaseBdev", 00:10:33.323 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:33.323 "is_configured": true, 00:10:33.323 "data_offset": 0, 00:10:33.323 "data_size": 65536 00:10:33.323 }, 00:10:33.323 { 00:10:33.323 "name": "BaseBdev2", 00:10:33.323 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:33.323 "is_configured": true, 00:10:33.323 "data_offset": 0, 00:10:33.323 "data_size": 65536 00:10:33.323 }, 00:10:33.323 { 00:10:33.323 "name": "BaseBdev3", 00:10:33.323 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:33.323 "is_configured": true, 00:10:33.323 "data_offset": 0, 00:10:33.323 "data_size": 65536 00:10:33.323 }, 00:10:33.323 { 00:10:33.323 "name": "BaseBdev4", 00:10:33.323 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:33.323 "is_configured": true, 00:10:33.323 "data_offset": 0, 00:10:33.323 "data_size": 65536 00:10:33.323 } 00:10:33.323 ] 00:10:33.323 }' 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.323 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.890 20:06:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.890 [2024-12-08 20:06:05.680599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.890 "name": "Existed_Raid", 00:10:33.890 "aliases": [ 00:10:33.890 "e2f53743-873b-4c8f-b5bc-6d586e19373a" 00:10:33.890 ], 00:10:33.890 "product_name": "Raid Volume", 00:10:33.890 "block_size": 512, 00:10:33.890 "num_blocks": 262144, 00:10:33.890 "uuid": "e2f53743-873b-4c8f-b5bc-6d586e19373a", 00:10:33.890 "assigned_rate_limits": { 00:10:33.890 "rw_ios_per_sec": 0, 00:10:33.890 "rw_mbytes_per_sec": 0, 00:10:33.890 "r_mbytes_per_sec": 0, 00:10:33.890 "w_mbytes_per_sec": 0 00:10:33.890 }, 00:10:33.890 "claimed": false, 00:10:33.890 "zoned": false, 00:10:33.890 "supported_io_types": { 00:10:33.890 "read": true, 00:10:33.890 "write": true, 00:10:33.890 "unmap": true, 00:10:33.890 "flush": true, 00:10:33.890 "reset": true, 00:10:33.890 "nvme_admin": false, 00:10:33.890 "nvme_io": false, 00:10:33.890 "nvme_io_md": false, 00:10:33.890 "write_zeroes": true, 00:10:33.890 "zcopy": false, 00:10:33.890 "get_zone_info": false, 00:10:33.890 "zone_management": false, 00:10:33.890 "zone_append": false, 00:10:33.890 "compare": false, 00:10:33.890 "compare_and_write": false, 00:10:33.890 "abort": false, 00:10:33.890 "seek_hole": false, 00:10:33.890 "seek_data": false, 00:10:33.890 "copy": false, 00:10:33.890 "nvme_iov_md": false 00:10:33.890 }, 
00:10:33.890 "memory_domains": [ 00:10:33.890 { 00:10:33.890 "dma_device_id": "system", 00:10:33.890 "dma_device_type": 1 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.890 "dma_device_type": 2 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "system", 00:10:33.890 "dma_device_type": 1 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.890 "dma_device_type": 2 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "system", 00:10:33.890 "dma_device_type": 1 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.890 "dma_device_type": 2 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "system", 00:10:33.890 "dma_device_type": 1 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.890 "dma_device_type": 2 00:10:33.890 } 00:10:33.890 ], 00:10:33.890 "driver_specific": { 00:10:33.890 "raid": { 00:10:33.890 "uuid": "e2f53743-873b-4c8f-b5bc-6d586e19373a", 00:10:33.890 "strip_size_kb": 64, 00:10:33.890 "state": "online", 00:10:33.890 "raid_level": "concat", 00:10:33.890 "superblock": false, 00:10:33.890 "num_base_bdevs": 4, 00:10:33.890 "num_base_bdevs_discovered": 4, 00:10:33.890 "num_base_bdevs_operational": 4, 00:10:33.890 "base_bdevs_list": [ 00:10:33.890 { 00:10:33.890 "name": "NewBaseBdev", 00:10:33.890 "uuid": "99472f09-9b01-49c2-b79d-f3b1ab9d589f", 00:10:33.890 "is_configured": true, 00:10:33.890 "data_offset": 0, 00:10:33.890 "data_size": 65536 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "name": "BaseBdev2", 00:10:33.890 "uuid": "3391fcf5-2e5f-406c-b3e6-43018f60b454", 00:10:33.890 "is_configured": true, 00:10:33.890 "data_offset": 0, 00:10:33.890 "data_size": 65536 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "name": "BaseBdev3", 00:10:33.890 "uuid": "3001e044-e696-4987-994d-9661d0febf24", 00:10:33.890 "is_configured": true, 00:10:33.890 "data_offset": 0, 
00:10:33.890 "data_size": 65536 00:10:33.890 }, 00:10:33.890 { 00:10:33.890 "name": "BaseBdev4", 00:10:33.890 "uuid": "364e3f8f-1287-43c2-9d98-b1a4782c0809", 00:10:33.890 "is_configured": true, 00:10:33.890 "data_offset": 0, 00:10:33.890 "data_size": 65536 00:10:33.890 } 00:10:33.890 ] 00:10:33.890 } 00:10:33.890 } 00:10:33.890 }' 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.890 BaseBdev2 00:10:33.890 BaseBdev3 00:10:33.890 BaseBdev4' 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.890 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.891 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.150 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.151 [2024-12-08 20:06:05.975723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.151 [2024-12-08 20:06:05.975795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.151 [2024-12-08 20:06:05.975892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.151 [2024-12-08 20:06:05.976019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.151 [2024-12-08 20:06:05.976066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71081 00:10:34.151 20:06:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71081 ']' 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71081 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.151 20:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71081 00:10:34.151 killing process with pid 71081 00:10:34.151 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.151 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.151 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71081' 00:10:34.151 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71081 00:10:34.151 [2024-12-08 20:06:06.013723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.151 20:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71081 00:10:34.718 [2024-12-08 20:06:06.394868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.686 00:10:35.686 real 0m11.621s 00:10:35.686 user 0m18.524s 00:10:35.686 sys 0m2.058s 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.686 ************************************ 00:10:35.686 END TEST raid_state_function_test 00:10:35.686 ************************************ 00:10:35.686 20:06:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:35.686 20:06:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:35.686 20:06:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.686 20:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.686 ************************************ 00:10:35.686 START TEST raid_state_function_test_sb 00:10:35.686 ************************************ 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71757 00:10:35.686 Process raid 
pid: 71757 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71757' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71757 00:10:35.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71757 ']' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.686 20:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.945 [2024-12-08 20:06:07.664191] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:35.945 [2024-12-08 20:06:07.664306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.945 [2024-12-08 20:06:07.835604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.205 [2024-12-08 20:06:07.946163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.205 [2024-12-08 20:06:08.145818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.205 [2024-12-08 20:06:08.145856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.773 [2024-12-08 20:06:08.491036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.773 [2024-12-08 20:06:08.491172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.773 [2024-12-08 20:06:08.491235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.773 [2024-12-08 20:06:08.491248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.773 [2024-12-08 20:06:08.491255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:36.773 [2024-12-08 20:06:08.491263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.773 [2024-12-08 20:06:08.491270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.773 [2024-12-08 20:06:08.491279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.773 
20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.773 "name": "Existed_Raid", 00:10:36.773 "uuid": "594877f2-4f9f-445c-bc26-01c2c9fb6982", 00:10:36.773 "strip_size_kb": 64, 00:10:36.773 "state": "configuring", 00:10:36.773 "raid_level": "concat", 00:10:36.773 "superblock": true, 00:10:36.773 "num_base_bdevs": 4, 00:10:36.773 "num_base_bdevs_discovered": 0, 00:10:36.773 "num_base_bdevs_operational": 4, 00:10:36.773 "base_bdevs_list": [ 00:10:36.773 { 00:10:36.773 "name": "BaseBdev1", 00:10:36.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.773 "is_configured": false, 00:10:36.773 "data_offset": 0, 00:10:36.773 "data_size": 0 00:10:36.773 }, 00:10:36.773 { 00:10:36.773 "name": "BaseBdev2", 00:10:36.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.773 "is_configured": false, 00:10:36.773 "data_offset": 0, 00:10:36.773 "data_size": 0 00:10:36.773 }, 00:10:36.773 { 00:10:36.773 "name": "BaseBdev3", 00:10:36.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.773 "is_configured": false, 00:10:36.773 "data_offset": 0, 00:10:36.773 "data_size": 0 00:10:36.773 }, 00:10:36.773 { 00:10:36.773 "name": "BaseBdev4", 00:10:36.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.773 "is_configured": false, 00:10:36.773 "data_offset": 0, 00:10:36.773 "data_size": 0 00:10:36.773 } 00:10:36.773 ] 00:10:36.773 }' 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.773 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.033 20:06:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.033 [2024-12-08 20:06:08.966133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.033 [2024-12-08 20:06:08.966172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.033 [2024-12-08 20:06:08.974125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.033 [2024-12-08 20:06:08.974166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.033 [2024-12-08 20:06:08.974175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.033 [2024-12-08 20:06:08.974200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.033 [2024-12-08 20:06:08.974206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.033 [2024-12-08 20:06:08.974214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.033 [2024-12-08 20:06:08.974220] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:37.033 [2024-12-08 20:06:08.974228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.033 20:06:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 [2024-12-08 20:06:09.017204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.293 BaseBdev1 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 [ 00:10:37.293 { 00:10:37.293 "name": "BaseBdev1", 00:10:37.293 "aliases": [ 00:10:37.293 "d275492c-d89f-43c2-b678-1dbae1d6b840" 00:10:37.293 ], 00:10:37.293 "product_name": "Malloc disk", 00:10:37.293 "block_size": 512, 00:10:37.293 "num_blocks": 65536, 00:10:37.293 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:37.293 "assigned_rate_limits": { 00:10:37.293 "rw_ios_per_sec": 0, 00:10:37.293 "rw_mbytes_per_sec": 0, 00:10:37.293 "r_mbytes_per_sec": 0, 00:10:37.293 "w_mbytes_per_sec": 0 00:10:37.293 }, 00:10:37.293 "claimed": true, 00:10:37.293 "claim_type": "exclusive_write", 00:10:37.293 "zoned": false, 00:10:37.293 "supported_io_types": { 00:10:37.293 "read": true, 00:10:37.293 "write": true, 00:10:37.293 "unmap": true, 00:10:37.293 "flush": true, 00:10:37.293 "reset": true, 00:10:37.293 "nvme_admin": false, 00:10:37.293 "nvme_io": false, 00:10:37.293 "nvme_io_md": false, 00:10:37.293 "write_zeroes": true, 00:10:37.293 "zcopy": true, 00:10:37.293 "get_zone_info": false, 00:10:37.293 "zone_management": false, 00:10:37.293 "zone_append": false, 00:10:37.293 "compare": false, 00:10:37.293 "compare_and_write": false, 00:10:37.293 "abort": true, 00:10:37.293 "seek_hole": false, 00:10:37.293 "seek_data": false, 00:10:37.293 "copy": true, 00:10:37.293 "nvme_iov_md": false 00:10:37.293 }, 00:10:37.293 "memory_domains": [ 00:10:37.293 { 00:10:37.293 "dma_device_id": "system", 00:10:37.293 "dma_device_type": 1 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.293 "dma_device_type": 2 00:10:37.293 } 
00:10:37.293 ], 00:10:37.293 "driver_specific": {} 00:10:37.293 } 00:10:37.293 ] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 20:06:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.293 "name": "Existed_Raid", 00:10:37.293 "uuid": "c8c99030-9934-4e7c-91fb-9a38913ec5c4", 00:10:37.293 "strip_size_kb": 64, 00:10:37.293 "state": "configuring", 00:10:37.293 "raid_level": "concat", 00:10:37.293 "superblock": true, 00:10:37.293 "num_base_bdevs": 4, 00:10:37.293 "num_base_bdevs_discovered": 1, 00:10:37.293 "num_base_bdevs_operational": 4, 00:10:37.293 "base_bdevs_list": [ 00:10:37.293 { 00:10:37.293 "name": "BaseBdev1", 00:10:37.293 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:37.293 "is_configured": true, 00:10:37.293 "data_offset": 2048, 00:10:37.293 "data_size": 63488 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "name": "BaseBdev2", 00:10:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.293 "is_configured": false, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 0 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "name": "BaseBdev3", 00:10:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.293 "is_configured": false, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 0 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "name": "BaseBdev4", 00:10:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.293 "is_configured": false, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 0 00:10:37.293 } 00:10:37.293 ] 00:10:37.293 }' 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.293 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.553 20:06:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.553 [2024-12-08 20:06:09.504409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.553 [2024-12-08 20:06:09.504508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.553 [2024-12-08 20:06:09.516475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.553 [2024-12-08 20:06:09.518353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.553 [2024-12-08 20:06:09.518400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.553 [2024-12-08 20:06:09.518410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.553 [2024-12-08 20:06:09.518421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.553 [2024-12-08 20:06:09.518428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.553 [2024-12-08 20:06:09.518436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.553 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:37.816 "name": "Existed_Raid", 00:10:37.816 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:37.816 "strip_size_kb": 64, 00:10:37.816 "state": "configuring", 00:10:37.816 "raid_level": "concat", 00:10:37.816 "superblock": true, 00:10:37.816 "num_base_bdevs": 4, 00:10:37.816 "num_base_bdevs_discovered": 1, 00:10:37.816 "num_base_bdevs_operational": 4, 00:10:37.816 "base_bdevs_list": [ 00:10:37.816 { 00:10:37.816 "name": "BaseBdev1", 00:10:37.816 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:37.816 "is_configured": true, 00:10:37.816 "data_offset": 2048, 00:10:37.816 "data_size": 63488 00:10:37.816 }, 00:10:37.816 { 00:10:37.816 "name": "BaseBdev2", 00:10:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.816 "is_configured": false, 00:10:37.816 "data_offset": 0, 00:10:37.816 "data_size": 0 00:10:37.816 }, 00:10:37.816 { 00:10:37.816 "name": "BaseBdev3", 00:10:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.816 "is_configured": false, 00:10:37.816 "data_offset": 0, 00:10:37.816 "data_size": 0 00:10:37.816 }, 00:10:37.816 { 00:10:37.816 "name": "BaseBdev4", 00:10:37.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.816 "is_configured": false, 00:10:37.816 "data_offset": 0, 00:10:37.816 "data_size": 0 00:10:37.816 } 00:10:37.816 ] 00:10:37.816 }' 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.816 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 [2024-12-08 20:06:09.992736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:38.075 BaseBdev2 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.075 20:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 [ 00:10:38.075 { 00:10:38.075 "name": "BaseBdev2", 00:10:38.075 "aliases": [ 00:10:38.075 "e49db245-e205-4061-a885-543d55962351" 00:10:38.075 ], 00:10:38.075 "product_name": "Malloc disk", 00:10:38.075 "block_size": 512, 00:10:38.075 "num_blocks": 65536, 00:10:38.075 "uuid": "e49db245-e205-4061-a885-543d55962351", 
00:10:38.075 "assigned_rate_limits": { 00:10:38.075 "rw_ios_per_sec": 0, 00:10:38.075 "rw_mbytes_per_sec": 0, 00:10:38.075 "r_mbytes_per_sec": 0, 00:10:38.075 "w_mbytes_per_sec": 0 00:10:38.075 }, 00:10:38.075 "claimed": true, 00:10:38.075 "claim_type": "exclusive_write", 00:10:38.075 "zoned": false, 00:10:38.075 "supported_io_types": { 00:10:38.075 "read": true, 00:10:38.075 "write": true, 00:10:38.075 "unmap": true, 00:10:38.075 "flush": true, 00:10:38.075 "reset": true, 00:10:38.075 "nvme_admin": false, 00:10:38.075 "nvme_io": false, 00:10:38.075 "nvme_io_md": false, 00:10:38.075 "write_zeroes": true, 00:10:38.075 "zcopy": true, 00:10:38.075 "get_zone_info": false, 00:10:38.075 "zone_management": false, 00:10:38.075 "zone_append": false, 00:10:38.075 "compare": false, 00:10:38.075 "compare_and_write": false, 00:10:38.075 "abort": true, 00:10:38.075 "seek_hole": false, 00:10:38.075 "seek_data": false, 00:10:38.075 "copy": true, 00:10:38.075 "nvme_iov_md": false 00:10:38.075 }, 00:10:38.075 "memory_domains": [ 00:10:38.075 { 00:10:38.075 "dma_device_id": "system", 00:10:38.075 "dma_device_type": 1 00:10:38.075 }, 00:10:38.075 { 00:10:38.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.075 "dma_device_type": 2 00:10:38.075 } 00:10:38.075 ], 00:10:38.075 "driver_specific": {} 00:10:38.075 } 00:10:38.075 ] 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.075 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.333 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.333 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.333 "name": "Existed_Raid", 00:10:38.333 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:38.333 "strip_size_kb": 64, 00:10:38.333 "state": "configuring", 00:10:38.333 "raid_level": "concat", 00:10:38.333 "superblock": true, 00:10:38.333 "num_base_bdevs": 4, 00:10:38.333 "num_base_bdevs_discovered": 2, 00:10:38.333 
"num_base_bdevs_operational": 4, 00:10:38.333 "base_bdevs_list": [ 00:10:38.333 { 00:10:38.333 "name": "BaseBdev1", 00:10:38.333 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:38.333 "is_configured": true, 00:10:38.333 "data_offset": 2048, 00:10:38.333 "data_size": 63488 00:10:38.333 }, 00:10:38.333 { 00:10:38.333 "name": "BaseBdev2", 00:10:38.333 "uuid": "e49db245-e205-4061-a885-543d55962351", 00:10:38.333 "is_configured": true, 00:10:38.333 "data_offset": 2048, 00:10:38.333 "data_size": 63488 00:10:38.333 }, 00:10:38.333 { 00:10:38.333 "name": "BaseBdev3", 00:10:38.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.333 "is_configured": false, 00:10:38.333 "data_offset": 0, 00:10:38.333 "data_size": 0 00:10:38.333 }, 00:10:38.333 { 00:10:38.333 "name": "BaseBdev4", 00:10:38.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.333 "is_configured": false, 00:10:38.333 "data_offset": 0, 00:10:38.333 "data_size": 0 00:10:38.333 } 00:10:38.333 ] 00:10:38.333 }' 00:10:38.333 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.333 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 [2024-12-08 20:06:10.521910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.591 BaseBdev3 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.591 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.591 [ 00:10:38.591 { 00:10:38.591 "name": "BaseBdev3", 00:10:38.591 "aliases": [ 00:10:38.591 "1b9f5143-592f-4741-885d-e1d5cf2494ac" 00:10:38.591 ], 00:10:38.591 "product_name": "Malloc disk", 00:10:38.591 "block_size": 512, 00:10:38.591 "num_blocks": 65536, 00:10:38.591 "uuid": "1b9f5143-592f-4741-885d-e1d5cf2494ac", 00:10:38.591 "assigned_rate_limits": { 00:10:38.591 "rw_ios_per_sec": 0, 00:10:38.591 "rw_mbytes_per_sec": 0, 00:10:38.591 "r_mbytes_per_sec": 0, 00:10:38.591 "w_mbytes_per_sec": 0 00:10:38.591 }, 00:10:38.591 "claimed": true, 00:10:38.591 "claim_type": "exclusive_write", 00:10:38.591 "zoned": false, 00:10:38.591 "supported_io_types": { 
00:10:38.591 "read": true, 00:10:38.591 "write": true, 00:10:38.591 "unmap": true, 00:10:38.591 "flush": true, 00:10:38.591 "reset": true, 00:10:38.591 "nvme_admin": false, 00:10:38.591 "nvme_io": false, 00:10:38.591 "nvme_io_md": false, 00:10:38.591 "write_zeroes": true, 00:10:38.591 "zcopy": true, 00:10:38.591 "get_zone_info": false, 00:10:38.591 "zone_management": false, 00:10:38.591 "zone_append": false, 00:10:38.591 "compare": false, 00:10:38.591 "compare_and_write": false, 00:10:38.591 "abort": true, 00:10:38.591 "seek_hole": false, 00:10:38.591 "seek_data": false, 00:10:38.591 "copy": true, 00:10:38.591 "nvme_iov_md": false 00:10:38.592 }, 00:10:38.592 "memory_domains": [ 00:10:38.592 { 00:10:38.592 "dma_device_id": "system", 00:10:38.592 "dma_device_type": 1 00:10:38.592 }, 00:10:38.592 { 00:10:38.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.592 "dma_device_type": 2 00:10:38.592 } 00:10:38.592 ], 00:10:38.592 "driver_specific": {} 00:10:38.592 } 00:10:38.592 ] 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.592 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.854 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.854 "name": "Existed_Raid", 00:10:38.854 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:38.854 "strip_size_kb": 64, 00:10:38.854 "state": "configuring", 00:10:38.854 "raid_level": "concat", 00:10:38.854 "superblock": true, 00:10:38.854 "num_base_bdevs": 4, 00:10:38.854 "num_base_bdevs_discovered": 3, 00:10:38.854 "num_base_bdevs_operational": 4, 00:10:38.854 "base_bdevs_list": [ 00:10:38.854 { 00:10:38.854 "name": "BaseBdev1", 00:10:38.854 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:38.854 "is_configured": true, 00:10:38.855 "data_offset": 2048, 00:10:38.855 "data_size": 63488 00:10:38.855 }, 00:10:38.855 { 00:10:38.855 "name": "BaseBdev2", 00:10:38.855 
"uuid": "e49db245-e205-4061-a885-543d55962351", 00:10:38.855 "is_configured": true, 00:10:38.855 "data_offset": 2048, 00:10:38.855 "data_size": 63488 00:10:38.855 }, 00:10:38.855 { 00:10:38.855 "name": "BaseBdev3", 00:10:38.855 "uuid": "1b9f5143-592f-4741-885d-e1d5cf2494ac", 00:10:38.855 "is_configured": true, 00:10:38.855 "data_offset": 2048, 00:10:38.855 "data_size": 63488 00:10:38.855 }, 00:10:38.855 { 00:10:38.855 "name": "BaseBdev4", 00:10:38.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.855 "is_configured": false, 00:10:38.855 "data_offset": 0, 00:10:38.855 "data_size": 0 00:10:38.855 } 00:10:38.855 ] 00:10:38.855 }' 00:10:38.855 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.855 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 20:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.120 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 20:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 [2024-12-08 20:06:11.027022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.120 [2024-12-08 20:06:11.027306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.120 [2024-12-08 20:06:11.027327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.120 [2024-12-08 20:06:11.027618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.120 [2024-12-08 20:06:11.027770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.120 [2024-12-08 20:06:11.027786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:39.120 BaseBdev4 00:10:39.120 [2024-12-08 20:06:11.027993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 [ 00:10:39.120 { 00:10:39.120 "name": "BaseBdev4", 00:10:39.120 "aliases": [ 00:10:39.120 "63ea3084-1fa7-4718-b665-daa340058a63" 00:10:39.120 ], 00:10:39.120 "product_name": "Malloc disk", 00:10:39.120 "block_size": 512, 
00:10:39.120 "num_blocks": 65536, 00:10:39.120 "uuid": "63ea3084-1fa7-4718-b665-daa340058a63", 00:10:39.120 "assigned_rate_limits": { 00:10:39.120 "rw_ios_per_sec": 0, 00:10:39.120 "rw_mbytes_per_sec": 0, 00:10:39.120 "r_mbytes_per_sec": 0, 00:10:39.120 "w_mbytes_per_sec": 0 00:10:39.120 }, 00:10:39.120 "claimed": true, 00:10:39.120 "claim_type": "exclusive_write", 00:10:39.120 "zoned": false, 00:10:39.120 "supported_io_types": { 00:10:39.120 "read": true, 00:10:39.120 "write": true, 00:10:39.120 "unmap": true, 00:10:39.120 "flush": true, 00:10:39.120 "reset": true, 00:10:39.120 "nvme_admin": false, 00:10:39.120 "nvme_io": false, 00:10:39.120 "nvme_io_md": false, 00:10:39.120 "write_zeroes": true, 00:10:39.120 "zcopy": true, 00:10:39.120 "get_zone_info": false, 00:10:39.120 "zone_management": false, 00:10:39.120 "zone_append": false, 00:10:39.120 "compare": false, 00:10:39.120 "compare_and_write": false, 00:10:39.120 "abort": true, 00:10:39.120 "seek_hole": false, 00:10:39.120 "seek_data": false, 00:10:39.120 "copy": true, 00:10:39.120 "nvme_iov_md": false 00:10:39.120 }, 00:10:39.120 "memory_domains": [ 00:10:39.120 { 00:10:39.120 "dma_device_id": "system", 00:10:39.120 "dma_device_type": 1 00:10:39.120 }, 00:10:39.120 { 00:10:39.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.120 "dma_device_type": 2 00:10:39.120 } 00:10:39.120 ], 00:10:39.120 "driver_specific": {} 00:10:39.120 } 00:10:39.120 ] 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.380 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.380 "name": "Existed_Raid", 00:10:39.380 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:39.380 "strip_size_kb": 64, 00:10:39.380 "state": "online", 00:10:39.380 "raid_level": "concat", 00:10:39.380 "superblock": true, 00:10:39.380 "num_base_bdevs": 
4, 00:10:39.380 "num_base_bdevs_discovered": 4, 00:10:39.380 "num_base_bdevs_operational": 4, 00:10:39.380 "base_bdevs_list": [ 00:10:39.380 { 00:10:39.380 "name": "BaseBdev1", 00:10:39.380 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:39.380 "is_configured": true, 00:10:39.380 "data_offset": 2048, 00:10:39.380 "data_size": 63488 00:10:39.380 }, 00:10:39.380 { 00:10:39.380 "name": "BaseBdev2", 00:10:39.380 "uuid": "e49db245-e205-4061-a885-543d55962351", 00:10:39.380 "is_configured": true, 00:10:39.380 "data_offset": 2048, 00:10:39.380 "data_size": 63488 00:10:39.380 }, 00:10:39.380 { 00:10:39.380 "name": "BaseBdev3", 00:10:39.380 "uuid": "1b9f5143-592f-4741-885d-e1d5cf2494ac", 00:10:39.380 "is_configured": true, 00:10:39.380 "data_offset": 2048, 00:10:39.380 "data_size": 63488 00:10:39.380 }, 00:10:39.380 { 00:10:39.380 "name": "BaseBdev4", 00:10:39.380 "uuid": "63ea3084-1fa7-4718-b665-daa340058a63", 00:10:39.380 "is_configured": true, 00:10:39.380 "data_offset": 2048, 00:10:39.380 "data_size": 63488 00:10:39.380 } 00:10:39.380 ] 00:10:39.380 }' 00:10:39.380 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.380 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.639 
20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.639 [2024-12-08 20:06:11.514551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.639 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.639 "name": "Existed_Raid", 00:10:39.639 "aliases": [ 00:10:39.639 "f2206a62-ffbe-4047-b4f1-4d990c977dc7" 00:10:39.639 ], 00:10:39.639 "product_name": "Raid Volume", 00:10:39.639 "block_size": 512, 00:10:39.639 "num_blocks": 253952, 00:10:39.639 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:39.639 "assigned_rate_limits": { 00:10:39.639 "rw_ios_per_sec": 0, 00:10:39.639 "rw_mbytes_per_sec": 0, 00:10:39.639 "r_mbytes_per_sec": 0, 00:10:39.639 "w_mbytes_per_sec": 0 00:10:39.639 }, 00:10:39.639 "claimed": false, 00:10:39.639 "zoned": false, 00:10:39.639 "supported_io_types": { 00:10:39.640 "read": true, 00:10:39.640 "write": true, 00:10:39.640 "unmap": true, 00:10:39.640 "flush": true, 00:10:39.640 "reset": true, 00:10:39.640 "nvme_admin": false, 00:10:39.640 "nvme_io": false, 00:10:39.640 "nvme_io_md": false, 00:10:39.640 "write_zeroes": true, 00:10:39.640 "zcopy": false, 00:10:39.640 "get_zone_info": false, 00:10:39.640 "zone_management": false, 00:10:39.640 "zone_append": false, 00:10:39.640 "compare": false, 00:10:39.640 "compare_and_write": false, 00:10:39.640 "abort": false, 00:10:39.640 "seek_hole": false, 00:10:39.640 "seek_data": false, 00:10:39.640 "copy": false, 00:10:39.640 
"nvme_iov_md": false 00:10:39.640 }, 00:10:39.640 "memory_domains": [ 00:10:39.640 { 00:10:39.640 "dma_device_id": "system", 00:10:39.640 "dma_device_type": 1 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.640 "dma_device_type": 2 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "system", 00:10:39.640 "dma_device_type": 1 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.640 "dma_device_type": 2 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "system", 00:10:39.640 "dma_device_type": 1 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.640 "dma_device_type": 2 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "system", 00:10:39.640 "dma_device_type": 1 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.640 "dma_device_type": 2 00:10:39.640 } 00:10:39.640 ], 00:10:39.640 "driver_specific": { 00:10:39.640 "raid": { 00:10:39.640 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:39.640 "strip_size_kb": 64, 00:10:39.640 "state": "online", 00:10:39.640 "raid_level": "concat", 00:10:39.640 "superblock": true, 00:10:39.640 "num_base_bdevs": 4, 00:10:39.640 "num_base_bdevs_discovered": 4, 00:10:39.640 "num_base_bdevs_operational": 4, 00:10:39.640 "base_bdevs_list": [ 00:10:39.640 { 00:10:39.640 "name": "BaseBdev1", 00:10:39.640 "uuid": "d275492c-d89f-43c2-b678-1dbae1d6b840", 00:10:39.640 "is_configured": true, 00:10:39.640 "data_offset": 2048, 00:10:39.640 "data_size": 63488 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "name": "BaseBdev2", 00:10:39.640 "uuid": "e49db245-e205-4061-a885-543d55962351", 00:10:39.640 "is_configured": true, 00:10:39.640 "data_offset": 2048, 00:10:39.640 "data_size": 63488 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "name": "BaseBdev3", 00:10:39.640 "uuid": "1b9f5143-592f-4741-885d-e1d5cf2494ac", 00:10:39.640 "is_configured": true, 
00:10:39.640 "data_offset": 2048, 00:10:39.640 "data_size": 63488 00:10:39.640 }, 00:10:39.640 { 00:10:39.640 "name": "BaseBdev4", 00:10:39.640 "uuid": "63ea3084-1fa7-4718-b665-daa340058a63", 00:10:39.640 "is_configured": true, 00:10:39.640 "data_offset": 2048, 00:10:39.640 "data_size": 63488 00:10:39.640 } 00:10:39.640 ] 00:10:39.640 } 00:10:39.640 } 00:10:39.640 }' 00:10:39.640 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.640 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.640 BaseBdev2 00:10:39.640 BaseBdev3 00:10:39.640 BaseBdev4' 00:10:39.640 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.900 20:06:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.900 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.900 [2024-12-08 20:06:11.825749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.900 [2024-12-08 20:06:11.825782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.900 [2024-12-08 20:06:11.825834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.159 "name": "Existed_Raid", 00:10:40.159 "uuid": "f2206a62-ffbe-4047-b4f1-4d990c977dc7", 00:10:40.159 "strip_size_kb": 64, 00:10:40.159 "state": "offline", 00:10:40.159 "raid_level": "concat", 00:10:40.159 "superblock": true, 00:10:40.159 "num_base_bdevs": 4, 00:10:40.159 "num_base_bdevs_discovered": 3, 00:10:40.159 "num_base_bdevs_operational": 3, 00:10:40.159 "base_bdevs_list": [ 00:10:40.159 { 00:10:40.159 "name": null, 00:10:40.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.159 "is_configured": false, 00:10:40.159 "data_offset": 0, 00:10:40.159 "data_size": 63488 00:10:40.159 }, 00:10:40.159 { 00:10:40.159 "name": "BaseBdev2", 00:10:40.159 "uuid": "e49db245-e205-4061-a885-543d55962351", 00:10:40.159 "is_configured": true, 00:10:40.159 "data_offset": 2048, 00:10:40.159 "data_size": 63488 00:10:40.159 }, 00:10:40.159 { 00:10:40.159 "name": "BaseBdev3", 00:10:40.159 "uuid": "1b9f5143-592f-4741-885d-e1d5cf2494ac", 00:10:40.159 "is_configured": true, 00:10:40.159 "data_offset": 2048, 00:10:40.159 "data_size": 63488 00:10:40.159 }, 00:10:40.159 { 00:10:40.159 "name": "BaseBdev4", 00:10:40.159 "uuid": "63ea3084-1fa7-4718-b665-daa340058a63", 00:10:40.159 "is_configured": true, 00:10:40.159 "data_offset": 2048, 00:10:40.159 "data_size": 63488 00:10:40.159 } 00:10:40.159 ] 00:10:40.159 }' 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.159 20:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.418 
20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.418 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.676 [2024-12-08 20:06:12.426429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.676 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.676 [2024-12-08 20:06:12.579755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:40.935 20:06:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.935 [2024-12-08 20:06:12.728865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:40.935 [2024-12-08 20:06:12.728915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.935 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.935 BaseBdev2 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.195 [ 00:10:41.195 { 00:10:41.195 "name": "BaseBdev2", 00:10:41.195 "aliases": [ 00:10:41.195 
"c0f02614-0abc-4c8d-9643-095a79e64f23" 00:10:41.195 ], 00:10:41.195 "product_name": "Malloc disk", 00:10:41.195 "block_size": 512, 00:10:41.195 "num_blocks": 65536, 00:10:41.195 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:41.195 "assigned_rate_limits": { 00:10:41.195 "rw_ios_per_sec": 0, 00:10:41.195 "rw_mbytes_per_sec": 0, 00:10:41.195 "r_mbytes_per_sec": 0, 00:10:41.195 "w_mbytes_per_sec": 0 00:10:41.195 }, 00:10:41.195 "claimed": false, 00:10:41.195 "zoned": false, 00:10:41.195 "supported_io_types": { 00:10:41.195 "read": true, 00:10:41.195 "write": true, 00:10:41.195 "unmap": true, 00:10:41.195 "flush": true, 00:10:41.195 "reset": true, 00:10:41.195 "nvme_admin": false, 00:10:41.195 "nvme_io": false, 00:10:41.195 "nvme_io_md": false, 00:10:41.195 "write_zeroes": true, 00:10:41.195 "zcopy": true, 00:10:41.195 "get_zone_info": false, 00:10:41.195 "zone_management": false, 00:10:41.195 "zone_append": false, 00:10:41.195 "compare": false, 00:10:41.195 "compare_and_write": false, 00:10:41.195 "abort": true, 00:10:41.195 "seek_hole": false, 00:10:41.195 "seek_data": false, 00:10:41.195 "copy": true, 00:10:41.195 "nvme_iov_md": false 00:10:41.195 }, 00:10:41.195 "memory_domains": [ 00:10:41.195 { 00:10:41.195 "dma_device_id": "system", 00:10:41.195 "dma_device_type": 1 00:10:41.195 }, 00:10:41.195 { 00:10:41.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.195 "dma_device_type": 2 00:10:41.195 } 00:10:41.195 ], 00:10:41.195 "driver_specific": {} 00:10:41.195 } 00:10:41.195 ] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.195 20:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.195 BaseBdev3 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.195 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.196 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.196 [ 00:10:41.196 { 
00:10:41.196 "name": "BaseBdev3", 00:10:41.196 "aliases": [ 00:10:41.196 "974691c7-dcfc-4096-a58d-4ea1d1ce1821" 00:10:41.196 ], 00:10:41.196 "product_name": "Malloc disk", 00:10:41.196 "block_size": 512, 00:10:41.196 "num_blocks": 65536, 00:10:41.196 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:41.196 "assigned_rate_limits": { 00:10:41.196 "rw_ios_per_sec": 0, 00:10:41.196 "rw_mbytes_per_sec": 0, 00:10:41.196 "r_mbytes_per_sec": 0, 00:10:41.196 "w_mbytes_per_sec": 0 00:10:41.196 }, 00:10:41.196 "claimed": false, 00:10:41.196 "zoned": false, 00:10:41.196 "supported_io_types": { 00:10:41.196 "read": true, 00:10:41.196 "write": true, 00:10:41.196 "unmap": true, 00:10:41.196 "flush": true, 00:10:41.196 "reset": true, 00:10:41.196 "nvme_admin": false, 00:10:41.196 "nvme_io": false, 00:10:41.196 "nvme_io_md": false, 00:10:41.196 "write_zeroes": true, 00:10:41.196 "zcopy": true, 00:10:41.196 "get_zone_info": false, 00:10:41.196 "zone_management": false, 00:10:41.196 "zone_append": false, 00:10:41.196 "compare": false, 00:10:41.196 "compare_and_write": false, 00:10:41.196 "abort": true, 00:10:41.196 "seek_hole": false, 00:10:41.196 "seek_data": false, 00:10:41.196 "copy": true, 00:10:41.196 "nvme_iov_md": false 00:10:41.196 }, 00:10:41.196 "memory_domains": [ 00:10:41.196 { 00:10:41.196 "dma_device_id": "system", 00:10:41.196 "dma_device_type": 1 00:10:41.196 }, 00:10:41.196 { 00:10:41.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.196 "dma_device_type": 2 00:10:41.196 } 00:10:41.196 ], 00:10:41.196 "driver_specific": {} 00:10:41.196 } 00:10:41.196 ] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.196 BaseBdev4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:41.196 [ 00:10:41.196 { 00:10:41.196 "name": "BaseBdev4", 00:10:41.196 "aliases": [ 00:10:41.196 "9458d516-18ad-4c7f-b9ca-c54cc269e623" 00:10:41.196 ], 00:10:41.196 "product_name": "Malloc disk", 00:10:41.196 "block_size": 512, 00:10:41.196 "num_blocks": 65536, 00:10:41.196 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:41.196 "assigned_rate_limits": { 00:10:41.196 "rw_ios_per_sec": 0, 00:10:41.196 "rw_mbytes_per_sec": 0, 00:10:41.196 "r_mbytes_per_sec": 0, 00:10:41.196 "w_mbytes_per_sec": 0 00:10:41.196 }, 00:10:41.196 "claimed": false, 00:10:41.196 "zoned": false, 00:10:41.196 "supported_io_types": { 00:10:41.196 "read": true, 00:10:41.196 "write": true, 00:10:41.196 "unmap": true, 00:10:41.196 "flush": true, 00:10:41.196 "reset": true, 00:10:41.196 "nvme_admin": false, 00:10:41.196 "nvme_io": false, 00:10:41.196 "nvme_io_md": false, 00:10:41.196 "write_zeroes": true, 00:10:41.196 "zcopy": true, 00:10:41.196 "get_zone_info": false, 00:10:41.196 "zone_management": false, 00:10:41.196 "zone_append": false, 00:10:41.196 "compare": false, 00:10:41.196 "compare_and_write": false, 00:10:41.196 "abort": true, 00:10:41.196 "seek_hole": false, 00:10:41.196 "seek_data": false, 00:10:41.196 "copy": true, 00:10:41.196 "nvme_iov_md": false 00:10:41.196 }, 00:10:41.196 "memory_domains": [ 00:10:41.196 { 00:10:41.196 "dma_device_id": "system", 00:10:41.196 "dma_device_type": 1 00:10:41.196 }, 00:10:41.196 { 00:10:41.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.196 "dma_device_type": 2 00:10:41.196 } 00:10:41.196 ], 00:10:41.196 "driver_specific": {} 00:10:41.196 } 00:10:41.196 ] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.196 20:06:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.196 [2024-12-08 20:06:13.116668] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.196 [2024-12-08 20:06:13.116748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.196 [2024-12-08 20:06:13.116805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.196 [2024-12-08 20:06:13.118572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.196 [2024-12-08 20:06:13.118669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.196 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.455 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.455 "name": "Existed_Raid", 00:10:41.455 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:41.455 "strip_size_kb": 64, 00:10:41.455 "state": "configuring", 00:10:41.455 "raid_level": "concat", 00:10:41.455 "superblock": true, 00:10:41.455 "num_base_bdevs": 4, 00:10:41.455 "num_base_bdevs_discovered": 3, 00:10:41.455 "num_base_bdevs_operational": 4, 00:10:41.455 "base_bdevs_list": [ 00:10:41.455 { 00:10:41.455 "name": "BaseBdev1", 00:10:41.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.455 "is_configured": false, 00:10:41.455 "data_offset": 0, 00:10:41.455 "data_size": 0 00:10:41.455 }, 00:10:41.455 { 00:10:41.455 "name": "BaseBdev2", 00:10:41.455 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:41.455 "is_configured": true, 00:10:41.455 "data_offset": 2048, 00:10:41.455 "data_size": 63488 
00:10:41.455 }, 00:10:41.455 { 00:10:41.455 "name": "BaseBdev3", 00:10:41.455 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:41.455 "is_configured": true, 00:10:41.455 "data_offset": 2048, 00:10:41.455 "data_size": 63488 00:10:41.455 }, 00:10:41.455 { 00:10:41.455 "name": "BaseBdev4", 00:10:41.455 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:41.455 "is_configured": true, 00:10:41.455 "data_offset": 2048, 00:10:41.455 "data_size": 63488 00:10:41.455 } 00:10:41.455 ] 00:10:41.455 }' 00:10:41.455 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.455 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 [2024-12-08 20:06:13.539930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.714 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.714 "name": "Existed_Raid", 00:10:41.714 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:41.714 "strip_size_kb": 64, 00:10:41.714 "state": "configuring", 00:10:41.714 "raid_level": "concat", 00:10:41.714 "superblock": true, 00:10:41.714 "num_base_bdevs": 4, 00:10:41.714 "num_base_bdevs_discovered": 2, 00:10:41.714 "num_base_bdevs_operational": 4, 00:10:41.714 "base_bdevs_list": [ 00:10:41.714 { 00:10:41.714 "name": "BaseBdev1", 00:10:41.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.714 "is_configured": false, 00:10:41.714 "data_offset": 0, 00:10:41.714 "data_size": 0 00:10:41.714 }, 00:10:41.714 { 00:10:41.714 "name": null, 00:10:41.714 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:41.714 "is_configured": false, 00:10:41.714 "data_offset": 0, 00:10:41.714 "data_size": 63488 
00:10:41.714 }, 00:10:41.714 { 00:10:41.714 "name": "BaseBdev3", 00:10:41.714 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:41.714 "is_configured": true, 00:10:41.714 "data_offset": 2048, 00:10:41.714 "data_size": 63488 00:10:41.714 }, 00:10:41.714 { 00:10:41.714 "name": "BaseBdev4", 00:10:41.715 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:41.715 "is_configured": true, 00:10:41.715 "data_offset": 2048, 00:10:41.715 "data_size": 63488 00:10:41.715 } 00:10:41.715 ] 00:10:41.715 }' 00:10:41.715 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.715 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.283 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.283 20:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.283 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.283 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.283 20:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.283 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.283 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.283 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.283 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.284 [2024-12-08 20:06:14.059754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.284 BaseBdev1 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.284 [ 00:10:42.284 { 00:10:42.284 "name": "BaseBdev1", 00:10:42.284 "aliases": [ 00:10:42.284 "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7" 00:10:42.284 ], 00:10:42.284 "product_name": "Malloc disk", 00:10:42.284 "block_size": 512, 00:10:42.284 "num_blocks": 65536, 00:10:42.284 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:42.284 "assigned_rate_limits": { 00:10:42.284 "rw_ios_per_sec": 0, 00:10:42.284 "rw_mbytes_per_sec": 0, 
00:10:42.284 "r_mbytes_per_sec": 0, 00:10:42.284 "w_mbytes_per_sec": 0 00:10:42.284 }, 00:10:42.284 "claimed": true, 00:10:42.284 "claim_type": "exclusive_write", 00:10:42.284 "zoned": false, 00:10:42.284 "supported_io_types": { 00:10:42.284 "read": true, 00:10:42.284 "write": true, 00:10:42.284 "unmap": true, 00:10:42.284 "flush": true, 00:10:42.284 "reset": true, 00:10:42.284 "nvme_admin": false, 00:10:42.284 "nvme_io": false, 00:10:42.284 "nvme_io_md": false, 00:10:42.284 "write_zeroes": true, 00:10:42.284 "zcopy": true, 00:10:42.284 "get_zone_info": false, 00:10:42.284 "zone_management": false, 00:10:42.284 "zone_append": false, 00:10:42.284 "compare": false, 00:10:42.284 "compare_and_write": false, 00:10:42.284 "abort": true, 00:10:42.284 "seek_hole": false, 00:10:42.284 "seek_data": false, 00:10:42.284 "copy": true, 00:10:42.284 "nvme_iov_md": false 00:10:42.284 }, 00:10:42.284 "memory_domains": [ 00:10:42.284 { 00:10:42.284 "dma_device_id": "system", 00:10:42.284 "dma_device_type": 1 00:10:42.284 }, 00:10:42.284 { 00:10:42.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.284 "dma_device_type": 2 00:10:42.284 } 00:10:42.284 ], 00:10:42.284 "driver_specific": {} 00:10:42.284 } 00:10:42.284 ] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.284 20:06:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.284 "name": "Existed_Raid", 00:10:42.284 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:42.284 "strip_size_kb": 64, 00:10:42.284 "state": "configuring", 00:10:42.284 "raid_level": "concat", 00:10:42.284 "superblock": true, 00:10:42.284 "num_base_bdevs": 4, 00:10:42.284 "num_base_bdevs_discovered": 3, 00:10:42.284 "num_base_bdevs_operational": 4, 00:10:42.284 "base_bdevs_list": [ 00:10:42.284 { 00:10:42.284 "name": "BaseBdev1", 00:10:42.284 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:42.284 "is_configured": true, 00:10:42.284 "data_offset": 2048, 00:10:42.284 "data_size": 63488 00:10:42.284 }, 00:10:42.284 { 
00:10:42.284 "name": null, 00:10:42.284 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:42.284 "is_configured": false, 00:10:42.284 "data_offset": 0, 00:10:42.284 "data_size": 63488 00:10:42.284 }, 00:10:42.284 { 00:10:42.284 "name": "BaseBdev3", 00:10:42.284 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:42.284 "is_configured": true, 00:10:42.284 "data_offset": 2048, 00:10:42.284 "data_size": 63488 00:10:42.284 }, 00:10:42.284 { 00:10:42.284 "name": "BaseBdev4", 00:10:42.284 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:42.284 "is_configured": true, 00:10:42.284 "data_offset": 2048, 00:10:42.284 "data_size": 63488 00:10:42.284 } 00:10:42.284 ] 00:10:42.284 }' 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.284 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.853 [2024-12-08 20:06:14.575006] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.853 20:06:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.853 "name": "Existed_Raid", 00:10:42.853 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:42.853 "strip_size_kb": 64, 00:10:42.853 "state": "configuring", 00:10:42.853 "raid_level": "concat", 00:10:42.853 "superblock": true, 00:10:42.853 "num_base_bdevs": 4, 00:10:42.853 "num_base_bdevs_discovered": 2, 00:10:42.853 "num_base_bdevs_operational": 4, 00:10:42.853 "base_bdevs_list": [ 00:10:42.853 { 00:10:42.853 "name": "BaseBdev1", 00:10:42.853 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:42.853 "is_configured": true, 00:10:42.853 "data_offset": 2048, 00:10:42.853 "data_size": 63488 00:10:42.853 }, 00:10:42.853 { 00:10:42.853 "name": null, 00:10:42.853 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:42.853 "is_configured": false, 00:10:42.853 "data_offset": 0, 00:10:42.853 "data_size": 63488 00:10:42.853 }, 00:10:42.853 { 00:10:42.853 "name": null, 00:10:42.853 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:42.853 "is_configured": false, 00:10:42.853 "data_offset": 0, 00:10:42.853 "data_size": 63488 00:10:42.853 }, 00:10:42.853 { 00:10:42.853 "name": "BaseBdev4", 00:10:42.853 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:42.853 "is_configured": true, 00:10:42.853 "data_offset": 2048, 00:10:42.853 "data_size": 63488 00:10:42.853 } 00:10:42.853 ] 00:10:42.853 }' 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.853 20:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 20:06:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 [2024-12-08 20:06:15.094103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.370 "name": "Existed_Raid", 00:10:43.370 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:43.370 "strip_size_kb": 64, 00:10:43.370 "state": "configuring", 00:10:43.370 "raid_level": "concat", 00:10:43.370 "superblock": true, 00:10:43.370 "num_base_bdevs": 4, 00:10:43.370 "num_base_bdevs_discovered": 3, 00:10:43.370 "num_base_bdevs_operational": 4, 00:10:43.370 "base_bdevs_list": [ 00:10:43.370 { 00:10:43.370 "name": "BaseBdev1", 00:10:43.370 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:43.370 "is_configured": true, 00:10:43.370 "data_offset": 2048, 00:10:43.370 "data_size": 63488 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "name": null, 00:10:43.370 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:43.370 "is_configured": false, 00:10:43.370 "data_offset": 0, 00:10:43.370 "data_size": 63488 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "name": "BaseBdev3", 00:10:43.370 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:43.370 "is_configured": true, 00:10:43.370 "data_offset": 2048, 00:10:43.370 "data_size": 63488 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "name": "BaseBdev4", 00:10:43.370 "uuid": 
"9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:43.370 "is_configured": true, 00:10:43.370 "data_offset": 2048, 00:10:43.370 "data_size": 63488 00:10:43.370 } 00:10:43.370 ] 00:10:43.370 }' 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.370 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.628 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.628 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.628 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.628 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.628 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.888 [2024-12-08 20:06:15.625217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.888 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.888 "name": "Existed_Raid", 00:10:43.888 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:43.888 "strip_size_kb": 64, 00:10:43.888 "state": "configuring", 00:10:43.888 "raid_level": "concat", 00:10:43.888 "superblock": true, 00:10:43.888 "num_base_bdevs": 4, 00:10:43.888 "num_base_bdevs_discovered": 2, 00:10:43.888 "num_base_bdevs_operational": 4, 00:10:43.888 "base_bdevs_list": [ 00:10:43.888 { 00:10:43.888 "name": null, 00:10:43.888 
"uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:43.888 "is_configured": false, 00:10:43.888 "data_offset": 0, 00:10:43.888 "data_size": 63488 00:10:43.888 }, 00:10:43.888 { 00:10:43.888 "name": null, 00:10:43.888 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:43.888 "is_configured": false, 00:10:43.888 "data_offset": 0, 00:10:43.888 "data_size": 63488 00:10:43.888 }, 00:10:43.888 { 00:10:43.888 "name": "BaseBdev3", 00:10:43.888 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:43.888 "is_configured": true, 00:10:43.888 "data_offset": 2048, 00:10:43.888 "data_size": 63488 00:10:43.888 }, 00:10:43.888 { 00:10:43.888 "name": "BaseBdev4", 00:10:43.888 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:43.888 "is_configured": true, 00:10:43.888 "data_offset": 2048, 00:10:43.889 "data_size": 63488 00:10:43.889 } 00:10:43.889 ] 00:10:43.889 }' 00:10:43.889 20:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.889 20:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.468 [2024-12-08 20:06:16.225208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.468 20:06:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.468 "name": "Existed_Raid", 00:10:44.468 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:44.468 "strip_size_kb": 64, 00:10:44.468 "state": "configuring", 00:10:44.468 "raid_level": "concat", 00:10:44.468 "superblock": true, 00:10:44.468 "num_base_bdevs": 4, 00:10:44.468 "num_base_bdevs_discovered": 3, 00:10:44.468 "num_base_bdevs_operational": 4, 00:10:44.468 "base_bdevs_list": [ 00:10:44.468 { 00:10:44.468 "name": null, 00:10:44.468 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:44.468 "is_configured": false, 00:10:44.468 "data_offset": 0, 00:10:44.468 "data_size": 63488 00:10:44.468 }, 00:10:44.468 { 00:10:44.468 "name": "BaseBdev2", 00:10:44.468 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:44.468 "is_configured": true, 00:10:44.468 "data_offset": 2048, 00:10:44.468 "data_size": 63488 00:10:44.468 }, 00:10:44.468 { 00:10:44.468 "name": "BaseBdev3", 00:10:44.468 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:44.468 "is_configured": true, 00:10:44.468 "data_offset": 2048, 00:10:44.468 "data_size": 63488 00:10:44.468 }, 00:10:44.468 { 00:10:44.468 "name": "BaseBdev4", 00:10:44.468 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:44.468 "is_configured": true, 00:10:44.468 "data_offset": 2048, 00:10:44.468 "data_size": 63488 00:10:44.468 } 00:10:44.468 ] 00:10:44.468 }' 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.468 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.759 20:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.759 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ca1e6d5-935e-4f7b-844e-acc3fafbacd7 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.020 [2024-12-08 20:06:16.796523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.020 [2024-12-08 20:06:16.796762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.020 [2024-12-08 20:06:16.796776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:45.020 [2024-12-08 20:06:16.797043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:45.020 [2024-12-08 20:06:16.797194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.020 [2024-12-08 20:06:16.797206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.020 [2024-12-08 20:06:16.797335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.020 NewBaseBdev 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.020 20:06:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.020 [ 00:10:45.020 { 00:10:45.020 "name": "NewBaseBdev", 00:10:45.020 "aliases": [ 00:10:45.020 "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7" 00:10:45.020 ], 00:10:45.020 "product_name": "Malloc disk", 00:10:45.020 "block_size": 512, 00:10:45.020 "num_blocks": 65536, 00:10:45.020 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:45.020 "assigned_rate_limits": { 00:10:45.020 "rw_ios_per_sec": 0, 00:10:45.020 "rw_mbytes_per_sec": 0, 00:10:45.020 "r_mbytes_per_sec": 0, 00:10:45.020 "w_mbytes_per_sec": 0 00:10:45.020 }, 00:10:45.020 "claimed": true, 00:10:45.020 "claim_type": "exclusive_write", 00:10:45.020 "zoned": false, 00:10:45.020 "supported_io_types": { 00:10:45.020 "read": true, 00:10:45.020 "write": true, 00:10:45.020 "unmap": true, 00:10:45.020 "flush": true, 00:10:45.020 "reset": true, 00:10:45.020 "nvme_admin": false, 00:10:45.020 "nvme_io": false, 00:10:45.020 "nvme_io_md": false, 00:10:45.020 "write_zeroes": true, 00:10:45.020 "zcopy": true, 00:10:45.020 "get_zone_info": false, 00:10:45.020 "zone_management": false, 00:10:45.020 "zone_append": false, 00:10:45.020 "compare": false, 00:10:45.020 "compare_and_write": false, 00:10:45.020 "abort": true, 00:10:45.020 "seek_hole": false, 00:10:45.020 "seek_data": false, 00:10:45.020 "copy": true, 00:10:45.020 "nvme_iov_md": false 00:10:45.020 }, 00:10:45.020 "memory_domains": [ 00:10:45.020 { 00:10:45.020 "dma_device_id": "system", 00:10:45.020 "dma_device_type": 1 00:10:45.020 }, 00:10:45.020 { 00:10:45.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.020 "dma_device_type": 2 00:10:45.020 } 00:10:45.020 ], 00:10:45.020 "driver_specific": {} 00:10:45.020 } 00:10:45.020 ] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.020 20:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.020 "name": "Existed_Raid", 00:10:45.020 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:45.020 "strip_size_kb": 64, 00:10:45.020 
"state": "online", 00:10:45.020 "raid_level": "concat", 00:10:45.020 "superblock": true, 00:10:45.020 "num_base_bdevs": 4, 00:10:45.020 "num_base_bdevs_discovered": 4, 00:10:45.020 "num_base_bdevs_operational": 4, 00:10:45.020 "base_bdevs_list": [ 00:10:45.020 { 00:10:45.020 "name": "NewBaseBdev", 00:10:45.020 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:45.020 "is_configured": true, 00:10:45.020 "data_offset": 2048, 00:10:45.020 "data_size": 63488 00:10:45.020 }, 00:10:45.020 { 00:10:45.020 "name": "BaseBdev2", 00:10:45.020 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:45.020 "is_configured": true, 00:10:45.020 "data_offset": 2048, 00:10:45.020 "data_size": 63488 00:10:45.020 }, 00:10:45.020 { 00:10:45.020 "name": "BaseBdev3", 00:10:45.020 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:45.020 "is_configured": true, 00:10:45.020 "data_offset": 2048, 00:10:45.020 "data_size": 63488 00:10:45.020 }, 00:10:45.020 { 00:10:45.020 "name": "BaseBdev4", 00:10:45.020 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:45.020 "is_configured": true, 00:10:45.020 "data_offset": 2048, 00:10:45.020 "data_size": 63488 00:10:45.020 } 00:10:45.020 ] 00:10:45.020 }' 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.020 20:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.590 
20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 [2024-12-08 20:06:17.336041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.590 "name": "Existed_Raid", 00:10:45.590 "aliases": [ 00:10:45.590 "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b" 00:10:45.590 ], 00:10:45.590 "product_name": "Raid Volume", 00:10:45.590 "block_size": 512, 00:10:45.590 "num_blocks": 253952, 00:10:45.590 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:45.590 "assigned_rate_limits": { 00:10:45.590 "rw_ios_per_sec": 0, 00:10:45.590 "rw_mbytes_per_sec": 0, 00:10:45.590 "r_mbytes_per_sec": 0, 00:10:45.590 "w_mbytes_per_sec": 0 00:10:45.590 }, 00:10:45.590 "claimed": false, 00:10:45.590 "zoned": false, 00:10:45.590 "supported_io_types": { 00:10:45.590 "read": true, 00:10:45.590 "write": true, 00:10:45.590 "unmap": true, 00:10:45.590 "flush": true, 00:10:45.590 "reset": true, 00:10:45.590 "nvme_admin": false, 00:10:45.590 "nvme_io": false, 00:10:45.590 "nvme_io_md": false, 00:10:45.590 "write_zeroes": true, 00:10:45.590 "zcopy": false, 00:10:45.590 "get_zone_info": false, 00:10:45.590 "zone_management": false, 00:10:45.590 "zone_append": false, 00:10:45.590 "compare": false, 00:10:45.590 "compare_and_write": false, 00:10:45.590 "abort": 
false, 00:10:45.590 "seek_hole": false, 00:10:45.590 "seek_data": false, 00:10:45.590 "copy": false, 00:10:45.590 "nvme_iov_md": false 00:10:45.590 }, 00:10:45.590 "memory_domains": [ 00:10:45.590 { 00:10:45.590 "dma_device_id": "system", 00:10:45.590 "dma_device_type": 1 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.590 "dma_device_type": 2 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "system", 00:10:45.590 "dma_device_type": 1 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.590 "dma_device_type": 2 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "system", 00:10:45.590 "dma_device_type": 1 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.590 "dma_device_type": 2 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "system", 00:10:45.590 "dma_device_type": 1 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.590 "dma_device_type": 2 00:10:45.590 } 00:10:45.590 ], 00:10:45.590 "driver_specific": { 00:10:45.590 "raid": { 00:10:45.590 "uuid": "77cc6ae0-46bf-4726-b2b8-94dc7456dc7b", 00:10:45.590 "strip_size_kb": 64, 00:10:45.590 "state": "online", 00:10:45.590 "raid_level": "concat", 00:10:45.590 "superblock": true, 00:10:45.590 "num_base_bdevs": 4, 00:10:45.590 "num_base_bdevs_discovered": 4, 00:10:45.590 "num_base_bdevs_operational": 4, 00:10:45.590 "base_bdevs_list": [ 00:10:45.590 { 00:10:45.590 "name": "NewBaseBdev", 00:10:45.590 "uuid": "4ca1e6d5-935e-4f7b-844e-acc3fafbacd7", 00:10:45.590 "is_configured": true, 00:10:45.590 "data_offset": 2048, 00:10:45.590 "data_size": 63488 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "name": "BaseBdev2", 00:10:45.590 "uuid": "c0f02614-0abc-4c8d-9643-095a79e64f23", 00:10:45.590 "is_configured": true, 00:10:45.590 "data_offset": 2048, 00:10:45.590 "data_size": 63488 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 
"name": "BaseBdev3", 00:10:45.590 "uuid": "974691c7-dcfc-4096-a58d-4ea1d1ce1821", 00:10:45.590 "is_configured": true, 00:10:45.590 "data_offset": 2048, 00:10:45.590 "data_size": 63488 00:10:45.590 }, 00:10:45.590 { 00:10:45.590 "name": "BaseBdev4", 00:10:45.590 "uuid": "9458d516-18ad-4c7f-b9ca-c54cc269e623", 00:10:45.590 "is_configured": true, 00:10:45.590 "data_offset": 2048, 00:10:45.590 "data_size": 63488 00:10:45.590 } 00:10:45.590 ] 00:10:45.590 } 00:10:45.590 } 00:10:45.590 }' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:45.590 BaseBdev2 00:10:45.590 BaseBdev3 00:10:45.590 BaseBdev4' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.590 20:06:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.590 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.850 [2024-12-08 20:06:17.643221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.850 [2024-12-08 20:06:17.643253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.850 [2024-12-08 20:06:17.643329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.850 [2024-12-08 20:06:17.643398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.850 [2024-12-08 20:06:17.643407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.850 20:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71757 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71757 ']' 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71757 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71757 00:10:45.851 killing process with pid 71757 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71757' 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71757 00:10:45.851 [2024-12-08 20:06:17.681374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.851 20:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71757 00:10:46.110 [2024-12-08 20:06:18.066833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.490 20:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.490 00:10:47.490 real 0m11.612s 00:10:47.490 user 0m18.574s 00:10:47.490 sys 0m1.983s 00:10:47.490 20:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.490 20:06:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.490 ************************************ 00:10:47.490 END TEST raid_state_function_test_sb 00:10:47.490 ************************************ 00:10:47.490 20:06:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:47.490 20:06:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:47.490 20:06:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.490 20:06:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.490 ************************************ 00:10:47.490 START TEST raid_superblock_test 00:10:47.490 ************************************ 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72426 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72426 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72426 ']' 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.490 20:06:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.490 [2024-12-08 20:06:19.337263] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:47.490 [2024-12-08 20:06:19.337470] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72426 ] 00:10:47.749 [2024-12-08 20:06:19.511368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.749 [2024-12-08 20:06:19.627109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.010 [2024-12-08 20:06:19.826162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.010 [2024-12-08 20:06:19.826265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:48.270 
20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.270 malloc1 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.270 [2024-12-08 20:06:20.208866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.270 [2024-12-08 20:06:20.208931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.270 [2024-12-08 20:06:20.209033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:48.270 [2024-12-08 20:06:20.209060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.270 [2024-12-08 20:06:20.211169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.270 [2024-12-08 20:06:20.211209] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.270 pt1 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.270 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 malloc2 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 [2024-12-08 20:06:20.261679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.531 [2024-12-08 20:06:20.261795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.531 [2024-12-08 20:06:20.261837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:48.531 [2024-12-08 20:06:20.261867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.531 [2024-12-08 20:06:20.263994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.531 [2024-12-08 20:06:20.264064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.531 
pt2 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 malloc3 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 [2024-12-08 20:06:20.349812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.531 [2024-12-08 20:06:20.349925] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.531 [2024-12-08 20:06:20.349995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:48.531 [2024-12-08 20:06:20.350032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.531 [2024-12-08 20:06:20.352187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.531 [2024-12-08 20:06:20.352261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.531 pt3 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 malloc4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 [2024-12-08 20:06:20.407616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:48.531 [2024-12-08 20:06:20.407672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.531 [2024-12-08 20:06:20.407692] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:48.531 [2024-12-08 20:06:20.407701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.531 [2024-12-08 20:06:20.409735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.531 [2024-12-08 20:06:20.409772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:48.531 pt4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 [2024-12-08 20:06:20.419635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.531 [2024-12-08 
20:06:20.421343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.531 [2024-12-08 20:06:20.421429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.531 [2024-12-08 20:06:20.421476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:48.531 [2024-12-08 20:06:20.421658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:48.531 [2024-12-08 20:06:20.421680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.531 [2024-12-08 20:06:20.421908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.531 [2024-12-08 20:06:20.422103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:48.531 [2024-12-08 20:06:20.422116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:48.531 [2024-12-08 20:06:20.422246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.531 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.532 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.532 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.532 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.532 "name": "raid_bdev1", 00:10:48.532 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:48.532 "strip_size_kb": 64, 00:10:48.532 "state": "online", 00:10:48.532 "raid_level": "concat", 00:10:48.532 "superblock": true, 00:10:48.532 "num_base_bdevs": 4, 00:10:48.532 "num_base_bdevs_discovered": 4, 00:10:48.532 "num_base_bdevs_operational": 4, 00:10:48.532 "base_bdevs_list": [ 00:10:48.532 { 00:10:48.532 "name": "pt1", 00:10:48.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 "data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "pt2", 00:10:48.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 "data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "pt3", 00:10:48.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 
"data_size": 63488 00:10:48.532 }, 00:10:48.532 { 00:10:48.532 "name": "pt4", 00:10:48.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.532 "is_configured": true, 00:10:48.532 "data_offset": 2048, 00:10:48.532 "data_size": 63488 00:10:48.532 } 00:10:48.532 ] 00:10:48.532 }' 00:10:48.532 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.532 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.102 [2024-12-08 20:06:20.863269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.102 "name": "raid_bdev1", 00:10:49.102 "aliases": [ 00:10:49.102 "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06" 
00:10:49.102 ], 00:10:49.102 "product_name": "Raid Volume", 00:10:49.102 "block_size": 512, 00:10:49.102 "num_blocks": 253952, 00:10:49.102 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:49.102 "assigned_rate_limits": { 00:10:49.102 "rw_ios_per_sec": 0, 00:10:49.102 "rw_mbytes_per_sec": 0, 00:10:49.102 "r_mbytes_per_sec": 0, 00:10:49.102 "w_mbytes_per_sec": 0 00:10:49.102 }, 00:10:49.102 "claimed": false, 00:10:49.102 "zoned": false, 00:10:49.102 "supported_io_types": { 00:10:49.102 "read": true, 00:10:49.102 "write": true, 00:10:49.102 "unmap": true, 00:10:49.102 "flush": true, 00:10:49.102 "reset": true, 00:10:49.102 "nvme_admin": false, 00:10:49.102 "nvme_io": false, 00:10:49.102 "nvme_io_md": false, 00:10:49.102 "write_zeroes": true, 00:10:49.102 "zcopy": false, 00:10:49.102 "get_zone_info": false, 00:10:49.102 "zone_management": false, 00:10:49.102 "zone_append": false, 00:10:49.102 "compare": false, 00:10:49.102 "compare_and_write": false, 00:10:49.102 "abort": false, 00:10:49.102 "seek_hole": false, 00:10:49.102 "seek_data": false, 00:10:49.102 "copy": false, 00:10:49.102 "nvme_iov_md": false 00:10:49.102 }, 00:10:49.102 "memory_domains": [ 00:10:49.102 { 00:10:49.102 "dma_device_id": "system", 00:10:49.102 "dma_device_type": 1 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.102 "dma_device_type": 2 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "system", 00:10:49.102 "dma_device_type": 1 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.102 "dma_device_type": 2 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "system", 00:10:49.102 "dma_device_type": 1 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.102 "dma_device_type": 2 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": "system", 00:10:49.102 "dma_device_type": 1 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:49.102 "dma_device_type": 2 00:10:49.102 } 00:10:49.102 ], 00:10:49.102 "driver_specific": { 00:10:49.102 "raid": { 00:10:49.102 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:49.102 "strip_size_kb": 64, 00:10:49.102 "state": "online", 00:10:49.102 "raid_level": "concat", 00:10:49.102 "superblock": true, 00:10:49.102 "num_base_bdevs": 4, 00:10:49.102 "num_base_bdevs_discovered": 4, 00:10:49.102 "num_base_bdevs_operational": 4, 00:10:49.102 "base_bdevs_list": [ 00:10:49.102 { 00:10:49.102 "name": "pt1", 00:10:49.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.102 "is_configured": true, 00:10:49.102 "data_offset": 2048, 00:10:49.102 "data_size": 63488 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "name": "pt2", 00:10:49.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.102 "is_configured": true, 00:10:49.102 "data_offset": 2048, 00:10:49.102 "data_size": 63488 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "name": "pt3", 00:10:49.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.102 "is_configured": true, 00:10:49.102 "data_offset": 2048, 00:10:49.102 "data_size": 63488 00:10:49.102 }, 00:10:49.102 { 00:10:49.102 "name": "pt4", 00:10:49.102 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.102 "is_configured": true, 00:10:49.102 "data_offset": 2048, 00:10:49.102 "data_size": 63488 00:10:49.102 } 00:10:49.102 ] 00:10:49.102 } 00:10:49.102 } 00:10:49.102 }' 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.102 pt2 00:10:49.102 pt3 00:10:49.102 pt4' 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.102 20:06:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.102 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.362 20:06:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.362 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 [2024-12-08 20:06:21.186612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6f0a4c9-e67a-4a90-9da8-6e1661f13e06 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c6f0a4c9-e67a-4a90-9da8-6e1661f13e06 ']' 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 [2024-12-08 20:06:21.250225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.363 [2024-12-08 20:06:21.250293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.363 [2024-12-08 20:06:21.250392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.363 [2024-12-08 20:06:21.250505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.363 [2024-12-08 20:06:21.250566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.363 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.623 20:06:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 [2024-12-08 20:06:21.398009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:49.623 [2024-12-08 20:06:21.400021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:49.623 [2024-12-08 20:06:21.400075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:49.623 [2024-12-08 20:06:21.400111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:49.623 [2024-12-08 20:06:21.400165] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:49.623 [2024-12-08 20:06:21.400219] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:49.623 [2024-12-08 20:06:21.400242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:49.623 [2024-12-08 20:06:21.400263] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:49.623 [2024-12-08 20:06:21.400278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.623 [2024-12-08 20:06:21.400290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:49.623 request: 00:10:49.623 { 00:10:49.623 "name": "raid_bdev1", 00:10:49.623 "raid_level": "concat", 00:10:49.623 "base_bdevs": [ 00:10:49.623 "malloc1", 00:10:49.623 "malloc2", 00:10:49.623 "malloc3", 00:10:49.623 "malloc4" 00:10:49.623 ], 00:10:49.623 "strip_size_kb": 64, 00:10:49.623 "superblock": false, 00:10:49.623 "method": "bdev_raid_create", 00:10:49.623 "req_id": 1 00:10:49.623 } 00:10:49.623 Got JSON-RPC error response 00:10:49.623 response: 00:10:49.623 { 00:10:49.623 "code": -17, 00:10:49.623 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:49.623 } 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:49.623 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.624 [2024-12-08 20:06:21.453869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.624 [2024-12-08 20:06:21.453969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.624 [2024-12-08 20:06:21.454024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:49.624 [2024-12-08 20:06:21.454068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.624 [2024-12-08 20:06:21.456397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.624 [2024-12-08 20:06:21.456474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.624 [2024-12-08 20:06:21.456598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:49.624 [2024-12-08 20:06:21.456711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.624 pt1 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.624 "name": "raid_bdev1", 00:10:49.624 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:49.624 "strip_size_kb": 64, 00:10:49.624 "state": "configuring", 00:10:49.624 "raid_level": "concat", 00:10:49.624 "superblock": true, 00:10:49.624 "num_base_bdevs": 4, 00:10:49.624 "num_base_bdevs_discovered": 1, 00:10:49.624 "num_base_bdevs_operational": 4, 00:10:49.624 "base_bdevs_list": [ 00:10:49.624 { 00:10:49.624 "name": "pt1", 00:10:49.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.624 "is_configured": true, 00:10:49.624 "data_offset": 2048, 00:10:49.624 "data_size": 63488 00:10:49.624 }, 00:10:49.624 { 00:10:49.624 "name": null, 00:10:49.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.624 "is_configured": false, 00:10:49.624 "data_offset": 2048, 00:10:49.624 "data_size": 63488 00:10:49.624 }, 00:10:49.624 { 00:10:49.624 "name": null, 00:10:49.624 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.624 "is_configured": false, 00:10:49.624 "data_offset": 2048, 00:10:49.624 "data_size": 63488 00:10:49.624 }, 00:10:49.624 { 00:10:49.624 "name": null, 00:10:49.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.624 "is_configured": false, 00:10:49.624 "data_offset": 2048, 00:10:49.624 "data_size": 63488 00:10:49.624 } 00:10:49.624 ] 00:10:49.624 }' 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.624 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.193 [2024-12-08 20:06:21.933111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.193 [2024-12-08 20:06:21.933243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.193 [2024-12-08 20:06:21.933270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:50.193 [2024-12-08 20:06:21.933298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.193 [2024-12-08 20:06:21.933753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.193 [2024-12-08 20:06:21.933783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.193 [2024-12-08 20:06:21.933867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.193 [2024-12-08 20:06:21.933893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.193 pt2 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.193 [2024-12-08 20:06:21.945118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.193 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.194 20:06:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.194 "name": "raid_bdev1", 00:10:50.194 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:50.194 "strip_size_kb": 64, 00:10:50.194 "state": "configuring", 00:10:50.194 "raid_level": "concat", 00:10:50.194 "superblock": true, 00:10:50.194 "num_base_bdevs": 4, 00:10:50.194 "num_base_bdevs_discovered": 1, 00:10:50.194 "num_base_bdevs_operational": 4, 00:10:50.194 "base_bdevs_list": [ 00:10:50.194 { 00:10:50.194 "name": "pt1", 00:10:50.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.194 "is_configured": true, 00:10:50.194 "data_offset": 2048, 00:10:50.194 "data_size": 63488 00:10:50.194 }, 00:10:50.194 { 00:10:50.194 "name": null, 00:10:50.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.194 "is_configured": false, 00:10:50.194 "data_offset": 0, 00:10:50.194 "data_size": 63488 00:10:50.194 }, 00:10:50.194 { 00:10:50.194 "name": null, 00:10:50.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.194 "is_configured": false, 00:10:50.194 "data_offset": 2048, 00:10:50.194 "data_size": 63488 00:10:50.194 }, 00:10:50.194 { 00:10:50.194 "name": null, 00:10:50.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.194 "is_configured": false, 00:10:50.194 "data_offset": 2048, 00:10:50.194 "data_size": 63488 00:10:50.194 } 00:10:50.194 ] 00:10:50.194 }' 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.194 20:06:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.453 [2024-12-08 20:06:22.360373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.453 [2024-12-08 20:06:22.360486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.453 [2024-12-08 20:06:22.360563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:50.453 [2024-12-08 20:06:22.360604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.453 [2024-12-08 20:06:22.361126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.453 [2024-12-08 20:06:22.361187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.453 [2024-12-08 20:06:22.361329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.453 [2024-12-08 20:06:22.361383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.453 pt2 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.453 [2024-12-08 20:06:22.372312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.453 [2024-12-08 20:06:22.372392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.453 [2024-12-08 20:06:22.372443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:50.453 [2024-12-08 20:06:22.372472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.453 [2024-12-08 20:06:22.372889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.453 [2024-12-08 20:06:22.372954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.453 [2024-12-08 20:06:22.373063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:50.453 [2024-12-08 20:06:22.373122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.453 pt3 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.453 [2024-12-08 20:06:22.384268] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:50.453 [2024-12-08 20:06:22.384309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.453 [2024-12-08 20:06:22.384324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:50.453 [2024-12-08 20:06:22.384332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.453 [2024-12-08 20:06:22.384675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.453 [2024-12-08 20:06:22.384691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:50.453 [2024-12-08 20:06:22.384746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:50.453 [2024-12-08 20:06:22.384766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:50.453 [2024-12-08 20:06:22.384892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.453 [2024-12-08 20:06:22.384901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:50.453 [2024-12-08 20:06:22.385137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:50.453 [2024-12-08 20:06:22.385276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.453 [2024-12-08 20:06:22.385290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:50.453 [2024-12-08 20:06:22.385418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.453 pt4 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.453 
20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.453 "name": "raid_bdev1", 00:10:50.453 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:50.453 "strip_size_kb": 64, 00:10:50.453 "state": "online", 00:10:50.453 "raid_level": "concat", 00:10:50.453 "superblock": true, 00:10:50.453 
"num_base_bdevs": 4, 00:10:50.453 "num_base_bdevs_discovered": 4, 00:10:50.453 "num_base_bdevs_operational": 4, 00:10:50.453 "base_bdevs_list": [ 00:10:50.453 { 00:10:50.453 "name": "pt1", 00:10:50.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.453 "is_configured": true, 00:10:50.453 "data_offset": 2048, 00:10:50.453 "data_size": 63488 00:10:50.453 }, 00:10:50.453 { 00:10:50.453 "name": "pt2", 00:10:50.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.453 "is_configured": true, 00:10:50.453 "data_offset": 2048, 00:10:50.453 "data_size": 63488 00:10:50.453 }, 00:10:50.453 { 00:10:50.453 "name": "pt3", 00:10:50.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.453 "is_configured": true, 00:10:50.453 "data_offset": 2048, 00:10:50.453 "data_size": 63488 00:10:50.453 }, 00:10:50.453 { 00:10:50.453 "name": "pt4", 00:10:50.453 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.453 "is_configured": true, 00:10:50.453 "data_offset": 2048, 00:10:50.453 "data_size": 63488 00:10:50.453 } 00:10:50.453 ] 00:10:50.453 }' 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.453 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.022 [2024-12-08 20:06:22.807955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.022 "name": "raid_bdev1", 00:10:51.022 "aliases": [ 00:10:51.022 "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06" 00:10:51.022 ], 00:10:51.022 "product_name": "Raid Volume", 00:10:51.022 "block_size": 512, 00:10:51.022 "num_blocks": 253952, 00:10:51.022 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:51.022 "assigned_rate_limits": { 00:10:51.022 "rw_ios_per_sec": 0, 00:10:51.022 "rw_mbytes_per_sec": 0, 00:10:51.022 "r_mbytes_per_sec": 0, 00:10:51.022 "w_mbytes_per_sec": 0 00:10:51.022 }, 00:10:51.022 "claimed": false, 00:10:51.022 "zoned": false, 00:10:51.022 "supported_io_types": { 00:10:51.022 "read": true, 00:10:51.022 "write": true, 00:10:51.022 "unmap": true, 00:10:51.022 "flush": true, 00:10:51.022 "reset": true, 00:10:51.022 "nvme_admin": false, 00:10:51.022 "nvme_io": false, 00:10:51.022 "nvme_io_md": false, 00:10:51.022 "write_zeroes": true, 00:10:51.022 "zcopy": false, 00:10:51.022 "get_zone_info": false, 00:10:51.022 "zone_management": false, 00:10:51.022 "zone_append": false, 00:10:51.022 "compare": false, 00:10:51.022 "compare_and_write": false, 00:10:51.022 "abort": false, 00:10:51.022 "seek_hole": false, 00:10:51.022 "seek_data": false, 00:10:51.022 "copy": false, 00:10:51.022 "nvme_iov_md": false 00:10:51.022 }, 00:10:51.022 "memory_domains": [ 00:10:51.022 { 00:10:51.022 "dma_device_id": "system", 
00:10:51.022 "dma_device_type": 1 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.022 "dma_device_type": 2 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "system", 00:10:51.022 "dma_device_type": 1 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.022 "dma_device_type": 2 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "system", 00:10:51.022 "dma_device_type": 1 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.022 "dma_device_type": 2 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "system", 00:10:51.022 "dma_device_type": 1 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.022 "dma_device_type": 2 00:10:51.022 } 00:10:51.022 ], 00:10:51.022 "driver_specific": { 00:10:51.022 "raid": { 00:10:51.022 "uuid": "c6f0a4c9-e67a-4a90-9da8-6e1661f13e06", 00:10:51.022 "strip_size_kb": 64, 00:10:51.022 "state": "online", 00:10:51.022 "raid_level": "concat", 00:10:51.022 "superblock": true, 00:10:51.022 "num_base_bdevs": 4, 00:10:51.022 "num_base_bdevs_discovered": 4, 00:10:51.022 "num_base_bdevs_operational": 4, 00:10:51.022 "base_bdevs_list": [ 00:10:51.022 { 00:10:51.022 "name": "pt1", 00:10:51.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.022 "is_configured": true, 00:10:51.022 "data_offset": 2048, 00:10:51.022 "data_size": 63488 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "name": "pt2", 00:10:51.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.022 "is_configured": true, 00:10:51.022 "data_offset": 2048, 00:10:51.022 "data_size": 63488 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "name": "pt3", 00:10:51.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.022 "is_configured": true, 00:10:51.022 "data_offset": 2048, 00:10:51.022 "data_size": 63488 00:10:51.022 }, 00:10:51.022 { 00:10:51.022 "name": "pt4", 00:10:51.022 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.022 "is_configured": true, 00:10:51.022 "data_offset": 2048, 00:10:51.022 "data_size": 63488 00:10:51.022 } 00:10:51.022 ] 00:10:51.022 } 00:10:51.022 } 00:10:51.022 }' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.022 pt2 00:10:51.022 pt3 00:10:51.022 pt4' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.022 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.281 20:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.281 20:06:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.281 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.281 20:06:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 20:06:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:51.281 [2024-12-08 20:06:23.131343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c6f0a4c9-e67a-4a90-9da8-6e1661f13e06 '!=' c6f0a4c9-e67a-4a90-9da8-6e1661f13e06 ']' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72426 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72426 ']' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72426 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:51.281 20:06:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72426 00:10:51.281 killing process with pid 72426 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72426' 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72426 00:10:51.281 [2024-12-08 20:06:23.188125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.281 [2024-12-08 20:06:23.188210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.281 [2024-12-08 20:06:23.188287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.281 [2024-12-08 20:06:23.188297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:51.281 20:06:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72426 00:10:51.849 [2024-12-08 20:06:23.573233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.785 20:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:52.785 00:10:52.785 real 0m5.428s 00:10:52.785 user 0m7.758s 00:10:52.785 sys 0m0.927s 00:10:52.785 ************************************ 00:10:52.785 END TEST raid_superblock_test 00:10:52.785 ************************************ 00:10:52.785 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.785 20:06:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.785 
20:06:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:52.785 20:06:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.785 20:06:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.785 20:06:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.785 ************************************ 00:10:52.785 START TEST raid_read_error_test 00:10:52.785 ************************************ 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.785 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.786 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0J46PT9NnN 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72685 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72685 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72685 ']' 00:10:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.044 20:06:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.044 [2024-12-08 20:06:24.847247] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:53.044 [2024-12-08 20:06:24.847368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72685 ] 00:10:53.302 [2024-12-08 20:06:25.021857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.302 [2024-12-08 20:06:25.140344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.561 [2024-12-08 20:06:25.333376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.561 [2024-12-08 20:06:25.333512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 BaseBdev1_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 true 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 [2024-12-08 20:06:25.741721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.820 [2024-12-08 20:06:25.741831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.820 [2024-12-08 20:06:25.741867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.820 [2024-12-08 20:06:25.741915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.820 [2024-12-08 20:06:25.743984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.820 [2024-12-08 20:06:25.744059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.820 BaseBdev1 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 BaseBdev2_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 true 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 [2024-12-08 20:06:25.808415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.081 [2024-12-08 20:06:25.808466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.081 [2024-12-08 20:06:25.808482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.081 [2024-12-08 20:06:25.808493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.081 [2024-12-08 20:06:25.810500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.081 [2024-12-08 20:06:25.810587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.081 BaseBdev2 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 BaseBdev3_malloc 00:10:54.081 20:06:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 true 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 [2024-12-08 20:06:25.908998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.081 [2024-12-08 20:06:25.909045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.081 [2024-12-08 20:06:25.909062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:54.081 [2024-12-08 20:06:25.909073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.081 [2024-12-08 20:06:25.911189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.081 [2024-12-08 20:06:25.911273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.081 BaseBdev3 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 BaseBdev4_malloc 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 true 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 [2024-12-08 20:06:25.975328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:54.081 [2024-12-08 20:06:25.975415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.081 [2024-12-08 20:06:25.975453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:54.081 [2024-12-08 20:06:25.975464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.081 [2024-12-08 20:06:25.977580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.081 [2024-12-08 20:06:25.977620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:54.081 BaseBdev4 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 [2024-12-08 20:06:25.987372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.081 [2024-12-08 20:06:25.989116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.081 [2024-12-08 20:06:25.989189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.081 [2024-12-08 20:06:25.989248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.081 [2024-12-08 20:06:25.989467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:54.081 [2024-12-08 20:06:25.989483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.081 [2024-12-08 20:06:25.989724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:54.081 [2024-12-08 20:06:25.989875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:54.081 [2024-12-08 20:06:25.989886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:54.081 [2024-12-08 20:06:25.990044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:54.081 20:06:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.081 20:06:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.081 "name": "raid_bdev1", 00:10:54.081 "uuid": "23b1adfe-847e-4a17-a5fe-ef6a5a4ddb5f", 00:10:54.081 "strip_size_kb": 64, 00:10:54.081 "state": "online", 00:10:54.081 "raid_level": "concat", 00:10:54.081 "superblock": true, 00:10:54.081 "num_base_bdevs": 4, 00:10:54.081 "num_base_bdevs_discovered": 4, 00:10:54.081 "num_base_bdevs_operational": 4, 00:10:54.081 "base_bdevs_list": [ 
00:10:54.081 { 00:10:54.081 "name": "BaseBdev1", 00:10:54.081 "uuid": "0299dbb9-abb1-5292-ad63-569295cbaac4", 00:10:54.081 "is_configured": true, 00:10:54.081 "data_offset": 2048, 00:10:54.081 "data_size": 63488 00:10:54.081 }, 00:10:54.081 { 00:10:54.081 "name": "BaseBdev2", 00:10:54.081 "uuid": "4d47570d-1608-5272-a1ee-dd951a24aa32", 00:10:54.081 "is_configured": true, 00:10:54.081 "data_offset": 2048, 00:10:54.081 "data_size": 63488 00:10:54.081 }, 00:10:54.081 { 00:10:54.081 "name": "BaseBdev3", 00:10:54.081 "uuid": "ad3678cb-fd15-5282-9277-41c3dfdc766d", 00:10:54.081 "is_configured": true, 00:10:54.081 "data_offset": 2048, 00:10:54.081 "data_size": 63488 00:10:54.081 }, 00:10:54.081 { 00:10:54.081 "name": "BaseBdev4", 00:10:54.081 "uuid": "621850c7-721d-5c54-a2bf-60f31497a16c", 00:10:54.081 "is_configured": true, 00:10:54.081 "data_offset": 2048, 00:10:54.081 "data_size": 63488 00:10:54.081 } 00:10:54.081 ] 00:10:54.081 }' 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.081 20:06:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.649 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:54.649 20:06:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:54.649 [2024-12-08 20:06:26.535778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.587 20:06:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.587 20:06:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.587 "name": "raid_bdev1", 00:10:55.587 "uuid": "23b1adfe-847e-4a17-a5fe-ef6a5a4ddb5f", 00:10:55.587 "strip_size_kb": 64, 00:10:55.587 "state": "online", 00:10:55.587 "raid_level": "concat", 00:10:55.587 "superblock": true, 00:10:55.587 "num_base_bdevs": 4, 00:10:55.587 "num_base_bdevs_discovered": 4, 00:10:55.587 "num_base_bdevs_operational": 4, 00:10:55.587 "base_bdevs_list": [ 00:10:55.587 { 00:10:55.587 "name": "BaseBdev1", 00:10:55.587 "uuid": "0299dbb9-abb1-5292-ad63-569295cbaac4", 00:10:55.587 "is_configured": true, 00:10:55.587 "data_offset": 2048, 00:10:55.587 "data_size": 63488 00:10:55.587 }, 00:10:55.587 { 00:10:55.587 "name": "BaseBdev2", 00:10:55.587 "uuid": "4d47570d-1608-5272-a1ee-dd951a24aa32", 00:10:55.587 "is_configured": true, 00:10:55.587 "data_offset": 2048, 00:10:55.587 "data_size": 63488 00:10:55.587 }, 00:10:55.587 { 00:10:55.587 "name": "BaseBdev3", 00:10:55.587 "uuid": "ad3678cb-fd15-5282-9277-41c3dfdc766d", 00:10:55.587 "is_configured": true, 00:10:55.587 "data_offset": 2048, 00:10:55.587 "data_size": 63488 00:10:55.587 }, 00:10:55.587 { 00:10:55.587 "name": "BaseBdev4", 00:10:55.587 "uuid": "621850c7-721d-5c54-a2bf-60f31497a16c", 00:10:55.587 "is_configured": true, 00:10:55.587 "data_offset": 2048, 00:10:55.587 "data_size": 63488 00:10:55.587 } 00:10:55.587 ] 00:10:55.587 }' 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.587 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.157 [2024-12-08 20:06:27.843430] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.157 [2024-12-08 20:06:27.843524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.157 [2024-12-08 20:06:27.846227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.157 [2024-12-08 20:06:27.846354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.157 [2024-12-08 20:06:27.846439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.157 [2024-12-08 20:06:27.846486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:56.157 { 00:10:56.157 "results": [ 00:10:56.157 { 00:10:56.157 "job": "raid_bdev1", 00:10:56.157 "core_mask": "0x1", 00:10:56.157 "workload": "randrw", 00:10:56.157 "percentage": 50, 00:10:56.157 "status": "finished", 00:10:56.157 "queue_depth": 1, 00:10:56.157 "io_size": 131072, 00:10:56.157 "runtime": 1.308434, 00:10:56.157 "iops": 15553.70771471851, 00:10:56.157 "mibps": 1944.2134643398138, 00:10:56.157 "io_failed": 1, 00:10:56.157 "io_timeout": 0, 00:10:56.157 "avg_latency_us": 89.31991156518633, 00:10:56.157 "min_latency_us": 25.823580786026202, 00:10:56.157 "max_latency_us": 1352.216593886463 00:10:56.157 } 00:10:56.157 ], 00:10:56.157 "core_count": 1 00:10:56.157 } 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72685 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72685 ']' 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72685 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72685 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72685' 00:10:56.157 killing process with pid 72685 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72685 00:10:56.157 [2024-12-08 20:06:27.890905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.157 20:06:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72685 00:10:56.416 [2024-12-08 20:06:28.202821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.357 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.357 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0J46PT9NnN 00:10:57.357 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:57.617 ************************************ 00:10:57.617 END TEST raid_read_error_test 00:10:57.617 ************************************ 00:10:57.617 00:10:57.617 real 0m4.600s 
00:10:57.617 user 0m5.375s 00:10:57.617 sys 0m0.590s 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.617 20:06:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.617 20:06:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:57.617 20:06:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.617 20:06:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.618 20:06:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.618 ************************************ 00:10:57.618 START TEST raid_write_error_test 00:10:57.618 ************************************ 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X23HugWwQX 00:10:57.618 20:06:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72834 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72834 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72834 ']' 00:10:57.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.618 20:06:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.618 [2024-12-08 20:06:29.514367] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:10:57.618 [2024-12-08 20:06:29.514474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72834 ] 00:10:57.878 [2024-12-08 20:06:29.688707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.878 [2024-12-08 20:06:29.806316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.137 [2024-12-08 20:06:30.008326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.137 [2024-12-08 20:06:30.008456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.397 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 BaseBdev1_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 true 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 [2024-12-08 20:06:30.391825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.657 [2024-12-08 20:06:30.391882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.657 [2024-12-08 20:06:30.391916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.657 [2024-12-08 20:06:30.391926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.657 [2024-12-08 20:06:30.393945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.657 [2024-12-08 20:06:30.394014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.657 BaseBdev1 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 BaseBdev2_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.657 20:06:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 true 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 [2024-12-08 20:06:30.458305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.657 [2024-12-08 20:06:30.458359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.657 [2024-12-08 20:06:30.458375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.657 [2024-12-08 20:06:30.458385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.657 [2024-12-08 20:06:30.460490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.657 [2024-12-08 20:06:30.460530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.657 BaseBdev2 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.657 BaseBdev3_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 true 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.657 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.657 [2024-12-08 20:06:30.535079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.657 [2024-12-08 20:06:30.535133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.657 [2024-12-08 20:06:30.535162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:58.657 [2024-12-08 20:06:30.535176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.657 [2024-12-08 20:06:30.537332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.658 [2024-12-08 20:06:30.537390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.658 BaseBdev3 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.658 BaseBdev4_malloc 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.658 true 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.658 [2024-12-08 20:06:30.600879] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:58.658 [2024-12-08 20:06:30.600932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.658 [2024-12-08 20:06:30.600961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.658 [2024-12-08 20:06:30.600971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.658 [2024-12-08 20:06:30.603023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.658 [2024-12-08 20:06:30.603060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.658 BaseBdev4 
00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.658 [2024-12-08 20:06:30.612926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.658 [2024-12-08 20:06:30.614732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.658 [2024-12-08 20:06:30.614804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.658 [2024-12-08 20:06:30.614861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.658 [2024-12-08 20:06:30.615081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:58.658 [2024-12-08 20:06:30.615098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.658 [2024-12-08 20:06:30.615389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:58.658 [2024-12-08 20:06:30.615569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:58.658 [2024-12-08 20:06:30.615581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:58.658 [2024-12-08 20:06:30.615770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.658 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.918 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.918 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.918 "name": "raid_bdev1", 00:10:58.918 "uuid": "7e2bc1af-bbd3-4bca-aaec-b14597a17ff1", 00:10:58.918 "strip_size_kb": 64, 00:10:58.918 "state": "online", 00:10:58.918 "raid_level": "concat", 00:10:58.918 "superblock": true, 00:10:58.918 "num_base_bdevs": 4, 00:10:58.918 "num_base_bdevs_discovered": 4, 00:10:58.918 
"num_base_bdevs_operational": 4, 00:10:58.918 "base_bdevs_list": [ 00:10:58.918 { 00:10:58.918 "name": "BaseBdev1", 00:10:58.918 "uuid": "341e670f-4de0-53b7-a311-b58e01456446", 00:10:58.918 "is_configured": true, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 }, 00:10:58.918 { 00:10:58.918 "name": "BaseBdev2", 00:10:58.918 "uuid": "26b481a1-9dc2-5a3c-b202-79e6dc0b140f", 00:10:58.918 "is_configured": true, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 }, 00:10:58.918 { 00:10:58.918 "name": "BaseBdev3", 00:10:58.918 "uuid": "23014cb7-8d16-5f64-a485-f70d26e8782c", 00:10:58.918 "is_configured": true, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 }, 00:10:58.918 { 00:10:58.918 "name": "BaseBdev4", 00:10:58.918 "uuid": "d9f10940-4a59-52ac-a110-44813de5bec5", 00:10:58.918 "is_configured": true, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 } 00:10:58.918 ] 00:10:58.918 }' 00:10:58.918 20:06:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.918 20:06:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.178 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.178 20:06:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.178 [2024-12-08 20:06:31.077417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 20:06:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.119 "name": "raid_bdev1", 00:11:00.119 "uuid": "7e2bc1af-bbd3-4bca-aaec-b14597a17ff1", 00:11:00.119 "strip_size_kb": 64, 00:11:00.119 "state": "online", 00:11:00.119 "raid_level": "concat", 00:11:00.119 "superblock": true, 00:11:00.119 "num_base_bdevs": 4, 00:11:00.119 "num_base_bdevs_discovered": 4, 00:11:00.119 "num_base_bdevs_operational": 4, 00:11:00.119 "base_bdevs_list": [ 00:11:00.119 { 00:11:00.119 "name": "BaseBdev1", 00:11:00.119 "uuid": "341e670f-4de0-53b7-a311-b58e01456446", 00:11:00.119 "is_configured": true, 00:11:00.119 "data_offset": 2048, 00:11:00.119 "data_size": 63488 00:11:00.119 }, 00:11:00.119 { 00:11:00.119 "name": "BaseBdev2", 00:11:00.119 "uuid": "26b481a1-9dc2-5a3c-b202-79e6dc0b140f", 00:11:00.119 "is_configured": true, 00:11:00.119 "data_offset": 2048, 00:11:00.119 "data_size": 63488 00:11:00.119 }, 00:11:00.119 { 00:11:00.119 "name": "BaseBdev3", 00:11:00.119 "uuid": "23014cb7-8d16-5f64-a485-f70d26e8782c", 00:11:00.119 "is_configured": true, 00:11:00.119 "data_offset": 2048, 00:11:00.119 "data_size": 63488 00:11:00.119 }, 00:11:00.119 { 00:11:00.119 "name": "BaseBdev4", 00:11:00.119 "uuid": "d9f10940-4a59-52ac-a110-44813de5bec5", 00:11:00.119 "is_configured": true, 00:11:00.119 "data_offset": 2048, 00:11:00.119 "data_size": 63488 00:11:00.119 } 00:11:00.119 ] 00:11:00.119 }' 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.119 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.764 [2024-12-08 20:06:32.417390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.764 [2024-12-08 20:06:32.417500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.764 [2024-12-08 20:06:32.420720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.764 [2024-12-08 20:06:32.420863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.764 [2024-12-08 20:06:32.420971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.764 [2024-12-08 20:06:32.421028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:00.764 { 00:11:00.764 "results": [ 00:11:00.764 { 00:11:00.764 "job": "raid_bdev1", 00:11:00.764 "core_mask": "0x1", 00:11:00.764 "workload": "randrw", 00:11:00.764 "percentage": 50, 00:11:00.764 "status": "finished", 00:11:00.764 "queue_depth": 1, 00:11:00.764 "io_size": 131072, 00:11:00.764 "runtime": 1.340879, 00:11:00.764 "iops": 15451.058596636982, 00:11:00.764 "mibps": 1931.3823245796227, 00:11:00.764 "io_failed": 1, 00:11:00.764 "io_timeout": 0, 00:11:00.764 "avg_latency_us": 89.87111288058911, 00:11:00.764 "min_latency_us": 26.382532751091702, 00:11:00.764 "max_latency_us": 1516.7720524017468 00:11:00.764 } 00:11:00.764 ], 00:11:00.764 "core_count": 1 00:11:00.764 } 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72834 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72834 ']' 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72834 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72834 00:11:00.764 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.765 killing process with pid 72834 00:11:00.765 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.765 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72834' 00:11:00.765 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72834 00:11:00.765 [2024-12-08 20:06:32.454856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.765 20:06:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72834 00:11:01.026 [2024-12-08 20:06:32.769437] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X23HugWwQX 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:01.965 00:11:01.965 real 0m4.470s 00:11:01.965 user 0m5.172s 
00:11:01.965 sys 0m0.558s 00:11:01.965 ************************************ 00:11:01.965 END TEST raid_write_error_test 00:11:01.965 ************************************ 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.965 20:06:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.965 20:06:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.965 20:06:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:01.965 20:06:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.965 20:06:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.965 20:06:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 ************************************ 00:11:02.225 START TEST raid_state_function_test 00:11:02.225 ************************************ 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.225 
20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:02.225 20:06:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72976 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72976' 00:11:02.225 Process raid pid: 72976 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72976 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72976 ']' 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.225 20:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 [2024-12-08 20:06:34.043702] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:02.225 [2024-12-08 20:06:34.043906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.485 [2024-12-08 20:06:34.216239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.485 [2024-12-08 20:06:34.324248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.745 [2024-12-08 20:06:34.522733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.745 [2024-12-08 20:06:34.522850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.005 [2024-12-08 20:06:34.871523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.005 [2024-12-08 20:06:34.871585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.005 [2024-12-08 20:06:34.871601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.005 [2024-12-08 20:06:34.871611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.005 [2024-12-08 20:06:34.871617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:03.005 [2024-12-08 20:06:34.871625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.005 [2024-12-08 20:06:34.871631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.005 [2024-12-08 20:06:34.871640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.005 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.005 "name": "Existed_Raid", 00:11:03.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.005 "strip_size_kb": 0, 00:11:03.005 "state": "configuring", 00:11:03.005 "raid_level": "raid1", 00:11:03.005 "superblock": false, 00:11:03.005 "num_base_bdevs": 4, 00:11:03.005 "num_base_bdevs_discovered": 0, 00:11:03.005 "num_base_bdevs_operational": 4, 00:11:03.005 "base_bdevs_list": [ 00:11:03.005 { 00:11:03.005 "name": "BaseBdev1", 00:11:03.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.005 "is_configured": false, 00:11:03.005 "data_offset": 0, 00:11:03.005 "data_size": 0 00:11:03.005 }, 00:11:03.005 { 00:11:03.005 "name": "BaseBdev2", 00:11:03.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.005 "is_configured": false, 00:11:03.005 "data_offset": 0, 00:11:03.005 "data_size": 0 00:11:03.005 }, 00:11:03.005 { 00:11:03.005 "name": "BaseBdev3", 00:11:03.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.006 "is_configured": false, 00:11:03.006 "data_offset": 0, 00:11:03.006 "data_size": 0 00:11:03.006 }, 00:11:03.006 { 00:11:03.006 "name": "BaseBdev4", 00:11:03.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.006 "is_configured": false, 00:11:03.006 "data_offset": 0, 00:11:03.006 "data_size": 0 00:11:03.006 } 00:11:03.006 ] 00:11:03.006 }' 00:11:03.006 20:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.006 20:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.576 [2024-12-08 20:06:35.274813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.576 [2024-12-08 20:06:35.274901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.576 [2024-12-08 20:06:35.286776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.576 [2024-12-08 20:06:35.286856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.576 [2024-12-08 20:06:35.286884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.576 [2024-12-08 20:06:35.286906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.576 [2024-12-08 20:06:35.286924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.576 [2024-12-08 20:06:35.286957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.576 [2024-12-08 20:06:35.286976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.576 [2024-12-08 20:06:35.287041] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.576 [2024-12-08 20:06:35.333077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.576 BaseBdev1 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.576 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.577 [ 00:11:03.577 { 00:11:03.577 "name": "BaseBdev1", 00:11:03.577 "aliases": [ 00:11:03.577 "b118180b-dac8-43be-b22d-8ba7bda3901f" 00:11:03.577 ], 00:11:03.577 "product_name": "Malloc disk", 00:11:03.577 "block_size": 512, 00:11:03.577 "num_blocks": 65536, 00:11:03.577 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:03.577 "assigned_rate_limits": { 00:11:03.577 "rw_ios_per_sec": 0, 00:11:03.577 "rw_mbytes_per_sec": 0, 00:11:03.577 "r_mbytes_per_sec": 0, 00:11:03.577 "w_mbytes_per_sec": 0 00:11:03.577 }, 00:11:03.577 "claimed": true, 00:11:03.577 "claim_type": "exclusive_write", 00:11:03.577 "zoned": false, 00:11:03.577 "supported_io_types": { 00:11:03.577 "read": true, 00:11:03.577 "write": true, 00:11:03.577 "unmap": true, 00:11:03.577 "flush": true, 00:11:03.577 "reset": true, 00:11:03.577 "nvme_admin": false, 00:11:03.577 "nvme_io": false, 00:11:03.577 "nvme_io_md": false, 00:11:03.577 "write_zeroes": true, 00:11:03.577 "zcopy": true, 00:11:03.577 "get_zone_info": false, 00:11:03.577 "zone_management": false, 00:11:03.577 "zone_append": false, 00:11:03.577 "compare": false, 00:11:03.577 "compare_and_write": false, 00:11:03.577 "abort": true, 00:11:03.577 "seek_hole": false, 00:11:03.577 "seek_data": false, 00:11:03.577 "copy": true, 00:11:03.577 "nvme_iov_md": false 00:11:03.577 }, 00:11:03.577 "memory_domains": [ 00:11:03.577 { 00:11:03.577 "dma_device_id": "system", 00:11:03.577 "dma_device_type": 1 00:11:03.577 }, 00:11:03.577 { 00:11:03.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.577 "dma_device_type": 2 00:11:03.577 } 00:11:03.577 ], 00:11:03.577 "driver_specific": {} 00:11:03.577 } 00:11:03.577 ] 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.577 "name": "Existed_Raid", 
00:11:03.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.577 "strip_size_kb": 0, 00:11:03.577 "state": "configuring", 00:11:03.577 "raid_level": "raid1", 00:11:03.577 "superblock": false, 00:11:03.577 "num_base_bdevs": 4, 00:11:03.577 "num_base_bdevs_discovered": 1, 00:11:03.577 "num_base_bdevs_operational": 4, 00:11:03.577 "base_bdevs_list": [ 00:11:03.577 { 00:11:03.577 "name": "BaseBdev1", 00:11:03.577 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:03.577 "is_configured": true, 00:11:03.577 "data_offset": 0, 00:11:03.577 "data_size": 65536 00:11:03.577 }, 00:11:03.577 { 00:11:03.577 "name": "BaseBdev2", 00:11:03.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.577 "is_configured": false, 00:11:03.577 "data_offset": 0, 00:11:03.577 "data_size": 0 00:11:03.577 }, 00:11:03.577 { 00:11:03.577 "name": "BaseBdev3", 00:11:03.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.577 "is_configured": false, 00:11:03.577 "data_offset": 0, 00:11:03.577 "data_size": 0 00:11:03.577 }, 00:11:03.577 { 00:11:03.577 "name": "BaseBdev4", 00:11:03.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.577 "is_configured": false, 00:11:03.577 "data_offset": 0, 00:11:03.577 "data_size": 0 00:11:03.577 } 00:11:03.577 ] 00:11:03.577 }' 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.577 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.837 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.837 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.837 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.837 [2024-12-08 20:06:35.736477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.837 [2024-12-08 20:06:35.736531] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.837 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.837 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.838 [2024-12-08 20:06:35.748489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.838 [2024-12-08 20:06:35.750302] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.838 [2024-12-08 20:06:35.750390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.838 [2024-12-08 20:06:35.750417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.838 [2024-12-08 20:06:35.750429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.838 [2024-12-08 20:06:35.750436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.838 [2024-12-08 20:06:35.750444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.838 
20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.838 "name": "Existed_Raid", 00:11:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.838 "strip_size_kb": 0, 00:11:03.838 "state": "configuring", 00:11:03.838 "raid_level": "raid1", 00:11:03.838 "superblock": false, 00:11:03.838 "num_base_bdevs": 4, 00:11:03.838 "num_base_bdevs_discovered": 1, 
00:11:03.838 "num_base_bdevs_operational": 4, 00:11:03.838 "base_bdevs_list": [ 00:11:03.838 { 00:11:03.838 "name": "BaseBdev1", 00:11:03.838 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:03.838 "is_configured": true, 00:11:03.838 "data_offset": 0, 00:11:03.838 "data_size": 65536 00:11:03.838 }, 00:11:03.838 { 00:11:03.838 "name": "BaseBdev2", 00:11:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.838 "is_configured": false, 00:11:03.838 "data_offset": 0, 00:11:03.838 "data_size": 0 00:11:03.838 }, 00:11:03.838 { 00:11:03.838 "name": "BaseBdev3", 00:11:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.838 "is_configured": false, 00:11:03.838 "data_offset": 0, 00:11:03.838 "data_size": 0 00:11:03.838 }, 00:11:03.838 { 00:11:03.838 "name": "BaseBdev4", 00:11:03.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.838 "is_configured": false, 00:11:03.838 "data_offset": 0, 00:11:03.838 "data_size": 0 00:11:03.838 } 00:11:03.838 ] 00:11:03.838 }' 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.838 20:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [2024-12-08 20:06:36.259997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.408 BaseBdev2 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.408 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [ 00:11:04.408 { 00:11:04.408 "name": "BaseBdev2", 00:11:04.408 "aliases": [ 00:11:04.408 "52230d87-d087-4b40-9885-e315c7561ce5" 00:11:04.408 ], 00:11:04.408 "product_name": "Malloc disk", 00:11:04.408 "block_size": 512, 00:11:04.408 "num_blocks": 65536, 00:11:04.408 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:04.408 "assigned_rate_limits": { 00:11:04.408 "rw_ios_per_sec": 0, 00:11:04.408 "rw_mbytes_per_sec": 0, 00:11:04.408 "r_mbytes_per_sec": 0, 00:11:04.408 "w_mbytes_per_sec": 0 00:11:04.408 }, 00:11:04.408 "claimed": true, 00:11:04.408 "claim_type": "exclusive_write", 00:11:04.409 "zoned": false, 00:11:04.409 "supported_io_types": { 00:11:04.409 "read": true, 
00:11:04.409 "write": true, 00:11:04.409 "unmap": true, 00:11:04.409 "flush": true, 00:11:04.409 "reset": true, 00:11:04.409 "nvme_admin": false, 00:11:04.409 "nvme_io": false, 00:11:04.409 "nvme_io_md": false, 00:11:04.409 "write_zeroes": true, 00:11:04.409 "zcopy": true, 00:11:04.409 "get_zone_info": false, 00:11:04.409 "zone_management": false, 00:11:04.409 "zone_append": false, 00:11:04.409 "compare": false, 00:11:04.409 "compare_and_write": false, 00:11:04.409 "abort": true, 00:11:04.409 "seek_hole": false, 00:11:04.409 "seek_data": false, 00:11:04.409 "copy": true, 00:11:04.409 "nvme_iov_md": false 00:11:04.409 }, 00:11:04.409 "memory_domains": [ 00:11:04.409 { 00:11:04.409 "dma_device_id": "system", 00:11:04.409 "dma_device_type": 1 00:11:04.409 }, 00:11:04.409 { 00:11:04.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.409 "dma_device_type": 2 00:11:04.409 } 00:11:04.409 ], 00:11:04.409 "driver_specific": {} 00:11:04.409 } 00:11:04.409 ] 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.409 "name": "Existed_Raid", 00:11:04.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.409 "strip_size_kb": 0, 00:11:04.409 "state": "configuring", 00:11:04.409 "raid_level": "raid1", 00:11:04.409 "superblock": false, 00:11:04.409 "num_base_bdevs": 4, 00:11:04.409 "num_base_bdevs_discovered": 2, 00:11:04.409 "num_base_bdevs_operational": 4, 00:11:04.409 "base_bdevs_list": [ 00:11:04.409 { 00:11:04.409 "name": "BaseBdev1", 00:11:04.409 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:04.409 "is_configured": true, 00:11:04.409 "data_offset": 0, 00:11:04.409 "data_size": 65536 00:11:04.409 }, 00:11:04.409 { 00:11:04.409 "name": "BaseBdev2", 00:11:04.409 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:04.409 "is_configured": true, 
00:11:04.409 "data_offset": 0, 00:11:04.409 "data_size": 65536 00:11:04.409 }, 00:11:04.409 { 00:11:04.409 "name": "BaseBdev3", 00:11:04.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.409 "is_configured": false, 00:11:04.409 "data_offset": 0, 00:11:04.409 "data_size": 0 00:11:04.409 }, 00:11:04.409 { 00:11:04.409 "name": "BaseBdev4", 00:11:04.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.409 "is_configured": false, 00:11:04.409 "data_offset": 0, 00:11:04.409 "data_size": 0 00:11:04.409 } 00:11:04.409 ] 00:11:04.409 }' 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.409 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 [2024-12-08 20:06:36.779186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.980 BaseBdev3 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 [ 00:11:04.980 { 00:11:04.980 "name": "BaseBdev3", 00:11:04.980 "aliases": [ 00:11:04.980 "3b1cec95-dabe-4da9-bf06-3c7cbfb8b666" 00:11:04.980 ], 00:11:04.980 "product_name": "Malloc disk", 00:11:04.980 "block_size": 512, 00:11:04.980 "num_blocks": 65536, 00:11:04.980 "uuid": "3b1cec95-dabe-4da9-bf06-3c7cbfb8b666", 00:11:04.980 "assigned_rate_limits": { 00:11:04.980 "rw_ios_per_sec": 0, 00:11:04.980 "rw_mbytes_per_sec": 0, 00:11:04.980 "r_mbytes_per_sec": 0, 00:11:04.980 "w_mbytes_per_sec": 0 00:11:04.980 }, 00:11:04.980 "claimed": true, 00:11:04.980 "claim_type": "exclusive_write", 00:11:04.980 "zoned": false, 00:11:04.980 "supported_io_types": { 00:11:04.980 "read": true, 00:11:04.980 "write": true, 00:11:04.980 "unmap": true, 00:11:04.980 "flush": true, 00:11:04.980 "reset": true, 00:11:04.980 "nvme_admin": false, 00:11:04.980 "nvme_io": false, 00:11:04.980 "nvme_io_md": false, 00:11:04.980 "write_zeroes": true, 00:11:04.980 "zcopy": true, 00:11:04.980 "get_zone_info": false, 00:11:04.980 "zone_management": false, 00:11:04.980 "zone_append": false, 00:11:04.980 "compare": false, 00:11:04.980 "compare_and_write": false, 
00:11:04.980 "abort": true, 00:11:04.980 "seek_hole": false, 00:11:04.980 "seek_data": false, 00:11:04.980 "copy": true, 00:11:04.980 "nvme_iov_md": false 00:11:04.980 }, 00:11:04.980 "memory_domains": [ 00:11:04.980 { 00:11:04.980 "dma_device_id": "system", 00:11:04.980 "dma_device_type": 1 00:11:04.980 }, 00:11:04.980 { 00:11:04.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.980 "dma_device_type": 2 00:11:04.980 } 00:11:04.980 ], 00:11:04.980 "driver_specific": {} 00:11:04.980 } 00:11:04.980 ] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.980 "name": "Existed_Raid", 00:11:04.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.980 "strip_size_kb": 0, 00:11:04.980 "state": "configuring", 00:11:04.980 "raid_level": "raid1", 00:11:04.980 "superblock": false, 00:11:04.980 "num_base_bdevs": 4, 00:11:04.980 "num_base_bdevs_discovered": 3, 00:11:04.980 "num_base_bdevs_operational": 4, 00:11:04.980 "base_bdevs_list": [ 00:11:04.980 { 00:11:04.980 "name": "BaseBdev1", 00:11:04.980 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:04.980 "is_configured": true, 00:11:04.980 "data_offset": 0, 00:11:04.980 "data_size": 65536 00:11:04.980 }, 00:11:04.980 { 00:11:04.980 "name": "BaseBdev2", 00:11:04.980 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:04.980 "is_configured": true, 00:11:04.980 "data_offset": 0, 00:11:04.980 "data_size": 65536 00:11:04.980 }, 00:11:04.980 { 00:11:04.980 "name": "BaseBdev3", 00:11:04.980 "uuid": "3b1cec95-dabe-4da9-bf06-3c7cbfb8b666", 00:11:04.980 "is_configured": true, 00:11:04.980 "data_offset": 0, 00:11:04.980 "data_size": 65536 00:11:04.980 }, 00:11:04.980 { 00:11:04.980 "name": "BaseBdev4", 00:11:04.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.980 "is_configured": false, 
00:11:04.980 "data_offset": 0, 00:11:04.980 "data_size": 0 00:11:04.980 } 00:11:04.980 ] 00:11:04.980 }' 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.980 20:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.550 [2024-12-08 20:06:37.292438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.550 [2024-12-08 20:06:37.292536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.550 [2024-12-08 20:06:37.292562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:05.550 [2024-12-08 20:06:37.292894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:05.550 [2024-12-08 20:06:37.293137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.550 [2024-12-08 20:06:37.293191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:05.550 [2024-12-08 20:06:37.293534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.550 BaseBdev4 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.550 [ 00:11:05.550 { 00:11:05.550 "name": "BaseBdev4", 00:11:05.550 "aliases": [ 00:11:05.550 "53e4f81e-8b89-40ca-b33e-7d64f54b7d93" 00:11:05.550 ], 00:11:05.550 "product_name": "Malloc disk", 00:11:05.550 "block_size": 512, 00:11:05.550 "num_blocks": 65536, 00:11:05.550 "uuid": "53e4f81e-8b89-40ca-b33e-7d64f54b7d93", 00:11:05.550 "assigned_rate_limits": { 00:11:05.550 "rw_ios_per_sec": 0, 00:11:05.550 "rw_mbytes_per_sec": 0, 00:11:05.550 "r_mbytes_per_sec": 0, 00:11:05.550 "w_mbytes_per_sec": 0 00:11:05.550 }, 00:11:05.550 "claimed": true, 00:11:05.550 "claim_type": "exclusive_write", 00:11:05.550 "zoned": false, 00:11:05.550 "supported_io_types": { 00:11:05.550 "read": true, 00:11:05.550 "write": true, 00:11:05.550 "unmap": true, 00:11:05.550 "flush": true, 00:11:05.550 "reset": true, 00:11:05.550 
"nvme_admin": false, 00:11:05.550 "nvme_io": false, 00:11:05.550 "nvme_io_md": false, 00:11:05.550 "write_zeroes": true, 00:11:05.550 "zcopy": true, 00:11:05.550 "get_zone_info": false, 00:11:05.550 "zone_management": false, 00:11:05.550 "zone_append": false, 00:11:05.550 "compare": false, 00:11:05.550 "compare_and_write": false, 00:11:05.550 "abort": true, 00:11:05.550 "seek_hole": false, 00:11:05.550 "seek_data": false, 00:11:05.550 "copy": true, 00:11:05.550 "nvme_iov_md": false 00:11:05.550 }, 00:11:05.550 "memory_domains": [ 00:11:05.550 { 00:11:05.550 "dma_device_id": "system", 00:11:05.550 "dma_device_type": 1 00:11:05.550 }, 00:11:05.550 { 00:11:05.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.550 "dma_device_type": 2 00:11:05.550 } 00:11:05.550 ], 00:11:05.550 "driver_specific": {} 00:11:05.550 } 00:11:05.550 ] 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.550 20:06:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.550 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.551 "name": "Existed_Raid", 00:11:05.551 "uuid": "bdf93d48-bd52-4a0d-ac9b-801ac9fa3568", 00:11:05.551 "strip_size_kb": 0, 00:11:05.551 "state": "online", 00:11:05.551 "raid_level": "raid1", 00:11:05.551 "superblock": false, 00:11:05.551 "num_base_bdevs": 4, 00:11:05.551 "num_base_bdevs_discovered": 4, 00:11:05.551 "num_base_bdevs_operational": 4, 00:11:05.551 "base_bdevs_list": [ 00:11:05.551 { 00:11:05.551 "name": "BaseBdev1", 00:11:05.551 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:05.551 "is_configured": true, 00:11:05.551 "data_offset": 0, 00:11:05.551 "data_size": 65536 00:11:05.551 }, 00:11:05.551 { 00:11:05.551 "name": "BaseBdev2", 00:11:05.551 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:05.551 "is_configured": true, 00:11:05.551 "data_offset": 0, 00:11:05.551 "data_size": 65536 00:11:05.551 }, 00:11:05.551 { 00:11:05.551 "name": "BaseBdev3", 00:11:05.551 "uuid": 
"3b1cec95-dabe-4da9-bf06-3c7cbfb8b666", 00:11:05.551 "is_configured": true, 00:11:05.551 "data_offset": 0, 00:11:05.551 "data_size": 65536 00:11:05.551 }, 00:11:05.551 { 00:11:05.551 "name": "BaseBdev4", 00:11:05.551 "uuid": "53e4f81e-8b89-40ca-b33e-7d64f54b7d93", 00:11:05.551 "is_configured": true, 00:11:05.551 "data_offset": 0, 00:11:05.551 "data_size": 65536 00:11:05.551 } 00:11:05.551 ] 00:11:05.551 }' 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.551 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.120 [2024-12-08 20:06:37.819909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.120 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.120 20:06:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.120 "name": "Existed_Raid", 00:11:06.120 "aliases": [ 00:11:06.120 "bdf93d48-bd52-4a0d-ac9b-801ac9fa3568" 00:11:06.120 ], 00:11:06.120 "product_name": "Raid Volume", 00:11:06.120 "block_size": 512, 00:11:06.120 "num_blocks": 65536, 00:11:06.120 "uuid": "bdf93d48-bd52-4a0d-ac9b-801ac9fa3568", 00:11:06.121 "assigned_rate_limits": { 00:11:06.121 "rw_ios_per_sec": 0, 00:11:06.121 "rw_mbytes_per_sec": 0, 00:11:06.121 "r_mbytes_per_sec": 0, 00:11:06.121 "w_mbytes_per_sec": 0 00:11:06.121 }, 00:11:06.121 "claimed": false, 00:11:06.121 "zoned": false, 00:11:06.121 "supported_io_types": { 00:11:06.121 "read": true, 00:11:06.121 "write": true, 00:11:06.121 "unmap": false, 00:11:06.121 "flush": false, 00:11:06.121 "reset": true, 00:11:06.121 "nvme_admin": false, 00:11:06.121 "nvme_io": false, 00:11:06.121 "nvme_io_md": false, 00:11:06.121 "write_zeroes": true, 00:11:06.121 "zcopy": false, 00:11:06.121 "get_zone_info": false, 00:11:06.121 "zone_management": false, 00:11:06.121 "zone_append": false, 00:11:06.121 "compare": false, 00:11:06.121 "compare_and_write": false, 00:11:06.121 "abort": false, 00:11:06.121 "seek_hole": false, 00:11:06.121 "seek_data": false, 00:11:06.121 "copy": false, 00:11:06.121 "nvme_iov_md": false 00:11:06.121 }, 00:11:06.121 "memory_domains": [ 00:11:06.121 { 00:11:06.121 "dma_device_id": "system", 00:11:06.121 "dma_device_type": 1 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.121 "dma_device_type": 2 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "system", 00:11:06.121 "dma_device_type": 1 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.121 "dma_device_type": 2 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "system", 00:11:06.121 "dma_device_type": 1 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:06.121 "dma_device_type": 2 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "system", 00:11:06.121 "dma_device_type": 1 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.121 "dma_device_type": 2 00:11:06.121 } 00:11:06.121 ], 00:11:06.121 "driver_specific": { 00:11:06.121 "raid": { 00:11:06.121 "uuid": "bdf93d48-bd52-4a0d-ac9b-801ac9fa3568", 00:11:06.121 "strip_size_kb": 0, 00:11:06.121 "state": "online", 00:11:06.121 "raid_level": "raid1", 00:11:06.121 "superblock": false, 00:11:06.121 "num_base_bdevs": 4, 00:11:06.121 "num_base_bdevs_discovered": 4, 00:11:06.121 "num_base_bdevs_operational": 4, 00:11:06.121 "base_bdevs_list": [ 00:11:06.121 { 00:11:06.121 "name": "BaseBdev1", 00:11:06.121 "uuid": "b118180b-dac8-43be-b22d-8ba7bda3901f", 00:11:06.121 "is_configured": true, 00:11:06.121 "data_offset": 0, 00:11:06.121 "data_size": 65536 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "name": "BaseBdev2", 00:11:06.121 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:06.121 "is_configured": true, 00:11:06.121 "data_offset": 0, 00:11:06.121 "data_size": 65536 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "name": "BaseBdev3", 00:11:06.121 "uuid": "3b1cec95-dabe-4da9-bf06-3c7cbfb8b666", 00:11:06.121 "is_configured": true, 00:11:06.121 "data_offset": 0, 00:11:06.121 "data_size": 65536 00:11:06.121 }, 00:11:06.121 { 00:11:06.121 "name": "BaseBdev4", 00:11:06.121 "uuid": "53e4f81e-8b89-40ca-b33e-7d64f54b7d93", 00:11:06.121 "is_configured": true, 00:11:06.121 "data_offset": 0, 00:11:06.121 "data_size": 65536 00:11:06.121 } 00:11:06.121 ] 00:11:06.121 } 00:11:06.121 } 00:11:06.121 }' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:06.121 BaseBdev2 00:11:06.121 BaseBdev3 
00:11:06.121 BaseBdev4' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 20:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 20:06:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.382 20:06:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 [2024-12-08 20:06:38.159184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.382 
20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.382 "name": "Existed_Raid", 00:11:06.382 "uuid": "bdf93d48-bd52-4a0d-ac9b-801ac9fa3568", 00:11:06.382 "strip_size_kb": 0, 00:11:06.382 "state": "online", 00:11:06.382 "raid_level": "raid1", 00:11:06.382 "superblock": false, 00:11:06.382 "num_base_bdevs": 4, 00:11:06.382 "num_base_bdevs_discovered": 3, 00:11:06.382 "num_base_bdevs_operational": 3, 00:11:06.382 "base_bdevs_list": [ 00:11:06.382 { 00:11:06.382 "name": null, 00:11:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.382 "is_configured": false, 00:11:06.382 "data_offset": 0, 00:11:06.382 "data_size": 65536 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "name": "BaseBdev2", 00:11:06.382 "uuid": "52230d87-d087-4b40-9885-e315c7561ce5", 00:11:06.382 "is_configured": true, 00:11:06.382 "data_offset": 0, 00:11:06.382 "data_size": 65536 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "name": "BaseBdev3", 00:11:06.383 "uuid": "3b1cec95-dabe-4da9-bf06-3c7cbfb8b666", 00:11:06.383 "is_configured": true, 00:11:06.383 "data_offset": 0, 
00:11:06.383 "data_size": 65536 00:11:06.383 }, 00:11:06.383 { 00:11:06.383 "name": "BaseBdev4", 00:11:06.383 "uuid": "53e4f81e-8b89-40ca-b33e-7d64f54b7d93", 00:11:06.383 "is_configured": true, 00:11:06.383 "data_offset": 0, 00:11:06.383 "data_size": 65536 00:11:06.383 } 00:11:06.383 ] 00:11:06.383 }' 00:11:06.383 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.383 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.952 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.953 [2024-12-08 20:06:38.729977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.953 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.953 [2024-12-08 20:06:38.877072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.212 20:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.212 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.212 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.212 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:07.212 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.212 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.212 [2024-12-08 20:06:39.025331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:07.212 [2024-12-08 20:06:39.025492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.212 [2024-12-08 20:06:39.117732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.212 [2024-12-08 20:06:39.117859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.212 [2024-12-08 20:06:39.117902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.213 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.473 BaseBdev2 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.473 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.473 [ 00:11:07.473 { 00:11:07.473 "name": "BaseBdev2", 00:11:07.473 "aliases": [ 00:11:07.473 "90657ae9-74d3-4e65-bd9e-20410a7d4a79" 00:11:07.473 ], 00:11:07.473 "product_name": "Malloc disk", 00:11:07.473 "block_size": 512, 00:11:07.473 "num_blocks": 65536, 00:11:07.473 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:07.473 "assigned_rate_limits": { 00:11:07.473 "rw_ios_per_sec": 0, 00:11:07.473 "rw_mbytes_per_sec": 0, 00:11:07.473 "r_mbytes_per_sec": 0, 00:11:07.473 "w_mbytes_per_sec": 0 00:11:07.473 }, 00:11:07.473 "claimed": false, 00:11:07.473 "zoned": false, 00:11:07.473 "supported_io_types": { 00:11:07.473 "read": true, 00:11:07.473 "write": true, 00:11:07.473 "unmap": true, 00:11:07.473 "flush": true, 00:11:07.473 "reset": true, 00:11:07.473 "nvme_admin": false, 00:11:07.473 "nvme_io": false, 00:11:07.473 "nvme_io_md": false, 00:11:07.473 "write_zeroes": true, 00:11:07.473 "zcopy": true, 00:11:07.473 "get_zone_info": false, 00:11:07.473 "zone_management": false, 00:11:07.473 "zone_append": false, 
00:11:07.473 "compare": false, 00:11:07.473 "compare_and_write": false, 00:11:07.473 "abort": true, 00:11:07.473 "seek_hole": false, 00:11:07.473 "seek_data": false, 00:11:07.473 "copy": true, 00:11:07.473 "nvme_iov_md": false 00:11:07.473 }, 00:11:07.473 "memory_domains": [ 00:11:07.473 { 00:11:07.473 "dma_device_id": "system", 00:11:07.473 "dma_device_type": 1 00:11:07.473 }, 00:11:07.473 { 00:11:07.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.474 "dma_device_type": 2 00:11:07.474 } 00:11:07.474 ], 00:11:07.474 "driver_specific": {} 00:11:07.474 } 00:11:07.474 ] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 BaseBdev3 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 [ 00:11:07.474 { 00:11:07.474 "name": "BaseBdev3", 00:11:07.474 "aliases": [ 00:11:07.474 "51b909c0-75c9-4772-9edd-0399b98fda8e" 00:11:07.474 ], 00:11:07.474 "product_name": "Malloc disk", 00:11:07.474 "block_size": 512, 00:11:07.474 "num_blocks": 65536, 00:11:07.474 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:07.474 "assigned_rate_limits": { 00:11:07.474 "rw_ios_per_sec": 0, 00:11:07.474 "rw_mbytes_per_sec": 0, 00:11:07.474 "r_mbytes_per_sec": 0, 00:11:07.474 "w_mbytes_per_sec": 0 00:11:07.474 }, 00:11:07.474 "claimed": false, 00:11:07.474 "zoned": false, 00:11:07.474 "supported_io_types": { 00:11:07.474 "read": true, 00:11:07.474 "write": true, 00:11:07.474 "unmap": true, 00:11:07.474 "flush": true, 00:11:07.474 "reset": true, 00:11:07.474 "nvme_admin": false, 00:11:07.474 "nvme_io": false, 00:11:07.474 "nvme_io_md": false, 00:11:07.474 "write_zeroes": true, 00:11:07.474 "zcopy": true, 00:11:07.474 "get_zone_info": false, 00:11:07.474 "zone_management": false, 00:11:07.474 "zone_append": false, 
00:11:07.474 "compare": false, 00:11:07.474 "compare_and_write": false, 00:11:07.474 "abort": true, 00:11:07.474 "seek_hole": false, 00:11:07.474 "seek_data": false, 00:11:07.474 "copy": true, 00:11:07.474 "nvme_iov_md": false 00:11:07.474 }, 00:11:07.474 "memory_domains": [ 00:11:07.474 { 00:11:07.474 "dma_device_id": "system", 00:11:07.474 "dma_device_type": 1 00:11:07.474 }, 00:11:07.474 { 00:11:07.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.474 "dma_device_type": 2 00:11:07.474 } 00:11:07.474 ], 00:11:07.474 "driver_specific": {} 00:11:07.474 } 00:11:07.474 ] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 BaseBdev4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 [ 00:11:07.474 { 00:11:07.474 "name": "BaseBdev4", 00:11:07.474 "aliases": [ 00:11:07.474 "285da12b-f5e0-4a01-bd7f-686cbda5cea8" 00:11:07.474 ], 00:11:07.474 "product_name": "Malloc disk", 00:11:07.474 "block_size": 512, 00:11:07.474 "num_blocks": 65536, 00:11:07.474 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:07.474 "assigned_rate_limits": { 00:11:07.474 "rw_ios_per_sec": 0, 00:11:07.474 "rw_mbytes_per_sec": 0, 00:11:07.474 "r_mbytes_per_sec": 0, 00:11:07.474 "w_mbytes_per_sec": 0 00:11:07.474 }, 00:11:07.474 "claimed": false, 00:11:07.474 "zoned": false, 00:11:07.474 "supported_io_types": { 00:11:07.474 "read": true, 00:11:07.474 "write": true, 00:11:07.474 "unmap": true, 00:11:07.474 "flush": true, 00:11:07.474 "reset": true, 00:11:07.474 "nvme_admin": false, 00:11:07.474 "nvme_io": false, 00:11:07.474 "nvme_io_md": false, 00:11:07.474 "write_zeroes": true, 00:11:07.474 "zcopy": true, 00:11:07.474 "get_zone_info": false, 00:11:07.474 "zone_management": false, 00:11:07.474 "zone_append": false, 
00:11:07.474 "compare": false, 00:11:07.474 "compare_and_write": false, 00:11:07.474 "abort": true, 00:11:07.474 "seek_hole": false, 00:11:07.474 "seek_data": false, 00:11:07.474 "copy": true, 00:11:07.474 "nvme_iov_md": false 00:11:07.474 }, 00:11:07.474 "memory_domains": [ 00:11:07.474 { 00:11:07.474 "dma_device_id": "system", 00:11:07.474 "dma_device_type": 1 00:11:07.474 }, 00:11:07.474 { 00:11:07.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.474 "dma_device_type": 2 00:11:07.474 } 00:11:07.474 ], 00:11:07.474 "driver_specific": {} 00:11:07.474 } 00:11:07.474 ] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.474 [2024-12-08 20:06:39.410409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.474 [2024-12-08 20:06:39.410456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.474 [2024-12-08 20:06:39.410475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.474 [2024-12-08 20:06:39.412378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.474 [2024-12-08 20:06:39.412426] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.474 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.475 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.475 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.733 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:07.733 "name": "Existed_Raid", 00:11:07.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.733 "strip_size_kb": 0, 00:11:07.733 "state": "configuring", 00:11:07.733 "raid_level": "raid1", 00:11:07.733 "superblock": false, 00:11:07.733 "num_base_bdevs": 4, 00:11:07.733 "num_base_bdevs_discovered": 3, 00:11:07.733 "num_base_bdevs_operational": 4, 00:11:07.733 "base_bdevs_list": [ 00:11:07.733 { 00:11:07.733 "name": "BaseBdev1", 00:11:07.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.733 "is_configured": false, 00:11:07.733 "data_offset": 0, 00:11:07.733 "data_size": 0 00:11:07.733 }, 00:11:07.733 { 00:11:07.733 "name": "BaseBdev2", 00:11:07.733 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:07.733 "is_configured": true, 00:11:07.734 "data_offset": 0, 00:11:07.734 "data_size": 65536 00:11:07.734 }, 00:11:07.734 { 00:11:07.734 "name": "BaseBdev3", 00:11:07.734 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:07.734 "is_configured": true, 00:11:07.734 "data_offset": 0, 00:11:07.734 "data_size": 65536 00:11:07.734 }, 00:11:07.734 { 00:11:07.734 "name": "BaseBdev4", 00:11:07.734 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:07.734 "is_configured": true, 00:11:07.734 "data_offset": 0, 00:11:07.734 "data_size": 65536 00:11:07.734 } 00:11:07.734 ] 00:11:07.734 }' 00:11:07.734 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.734 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.994 [2024-12-08 20:06:39.805721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.994 "name": "Existed_Raid", 00:11:07.994 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:07.994 "strip_size_kb": 0, 00:11:07.994 "state": "configuring", 00:11:07.994 "raid_level": "raid1", 00:11:07.994 "superblock": false, 00:11:07.994 "num_base_bdevs": 4, 00:11:07.994 "num_base_bdevs_discovered": 2, 00:11:07.994 "num_base_bdevs_operational": 4, 00:11:07.994 "base_bdevs_list": [ 00:11:07.994 { 00:11:07.994 "name": "BaseBdev1", 00:11:07.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.994 "is_configured": false, 00:11:07.994 "data_offset": 0, 00:11:07.994 "data_size": 0 00:11:07.994 }, 00:11:07.994 { 00:11:07.994 "name": null, 00:11:07.994 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:07.994 "is_configured": false, 00:11:07.994 "data_offset": 0, 00:11:07.994 "data_size": 65536 00:11:07.994 }, 00:11:07.994 { 00:11:07.994 "name": "BaseBdev3", 00:11:07.994 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:07.994 "is_configured": true, 00:11:07.994 "data_offset": 0, 00:11:07.994 "data_size": 65536 00:11:07.994 }, 00:11:07.994 { 00:11:07.994 "name": "BaseBdev4", 00:11:07.994 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:07.994 "is_configured": true, 00:11:07.994 "data_offset": 0, 00:11:07.994 "data_size": 65536 00:11:07.994 } 00:11:07.994 ] 00:11:07.994 }' 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.994 20:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.259 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.259 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.259 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.259 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.259 20:06:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.518 [2024-12-08 20:06:40.292896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.518 BaseBdev1 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.518 [ 00:11:08.518 { 00:11:08.518 "name": "BaseBdev1", 00:11:08.518 "aliases": [ 00:11:08.518 "276c03be-282f-4774-9bf5-19301d9b7d86" 00:11:08.518 ], 00:11:08.518 "product_name": "Malloc disk", 00:11:08.518 "block_size": 512, 00:11:08.518 "num_blocks": 65536, 00:11:08.518 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:08.518 "assigned_rate_limits": { 00:11:08.518 "rw_ios_per_sec": 0, 00:11:08.518 "rw_mbytes_per_sec": 0, 00:11:08.518 "r_mbytes_per_sec": 0, 00:11:08.518 "w_mbytes_per_sec": 0 00:11:08.518 }, 00:11:08.518 "claimed": true, 00:11:08.518 "claim_type": "exclusive_write", 00:11:08.518 "zoned": false, 00:11:08.518 "supported_io_types": { 00:11:08.518 "read": true, 00:11:08.518 "write": true, 00:11:08.518 "unmap": true, 00:11:08.518 "flush": true, 00:11:08.518 "reset": true, 00:11:08.518 "nvme_admin": false, 00:11:08.518 "nvme_io": false, 00:11:08.518 "nvme_io_md": false, 00:11:08.518 "write_zeroes": true, 00:11:08.518 "zcopy": true, 00:11:08.518 "get_zone_info": false, 00:11:08.518 "zone_management": false, 00:11:08.518 "zone_append": false, 00:11:08.518 "compare": false, 00:11:08.518 "compare_and_write": false, 00:11:08.518 "abort": true, 00:11:08.518 "seek_hole": false, 00:11:08.518 "seek_data": false, 00:11:08.518 "copy": true, 00:11:08.518 "nvme_iov_md": false 00:11:08.518 }, 00:11:08.518 "memory_domains": [ 00:11:08.518 { 00:11:08.518 "dma_device_id": "system", 00:11:08.518 "dma_device_type": 1 00:11:08.518 }, 00:11:08.518 { 00:11:08.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.518 "dma_device_type": 2 00:11:08.518 } 00:11:08.518 ], 00:11:08.518 "driver_specific": {} 00:11:08.518 } 00:11:08.518 ] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.518 "name": "Existed_Raid", 00:11:08.518 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:08.518 "strip_size_kb": 0, 00:11:08.518 "state": "configuring", 00:11:08.518 "raid_level": "raid1", 00:11:08.518 "superblock": false, 00:11:08.518 "num_base_bdevs": 4, 00:11:08.518 "num_base_bdevs_discovered": 3, 00:11:08.518 "num_base_bdevs_operational": 4, 00:11:08.518 "base_bdevs_list": [ 00:11:08.518 { 00:11:08.518 "name": "BaseBdev1", 00:11:08.518 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:08.518 "is_configured": true, 00:11:08.518 "data_offset": 0, 00:11:08.518 "data_size": 65536 00:11:08.518 }, 00:11:08.518 { 00:11:08.518 "name": null, 00:11:08.518 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:08.518 "is_configured": false, 00:11:08.518 "data_offset": 0, 00:11:08.518 "data_size": 65536 00:11:08.518 }, 00:11:08.518 { 00:11:08.518 "name": "BaseBdev3", 00:11:08.518 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:08.518 "is_configured": true, 00:11:08.518 "data_offset": 0, 00:11:08.518 "data_size": 65536 00:11:08.518 }, 00:11:08.518 { 00:11:08.518 "name": "BaseBdev4", 00:11:08.518 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:08.518 "is_configured": true, 00:11:08.518 "data_offset": 0, 00:11:08.518 "data_size": 65536 00:11:08.518 } 00:11:08.518 ] 00:11:08.518 }' 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.518 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.777 [2024-12-08 20:06:40.736207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.777 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.035 20:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.035 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.035 "name": "Existed_Raid", 00:11:09.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.035 "strip_size_kb": 0, 00:11:09.035 "state": "configuring", 00:11:09.035 "raid_level": "raid1", 00:11:09.035 "superblock": false, 00:11:09.035 "num_base_bdevs": 4, 00:11:09.035 "num_base_bdevs_discovered": 2, 00:11:09.035 "num_base_bdevs_operational": 4, 00:11:09.035 "base_bdevs_list": [ 00:11:09.035 { 00:11:09.035 "name": "BaseBdev1", 00:11:09.035 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:09.035 "is_configured": true, 00:11:09.035 "data_offset": 0, 00:11:09.035 "data_size": 65536 00:11:09.035 }, 00:11:09.035 { 00:11:09.035 "name": null, 00:11:09.035 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:09.035 "is_configured": false, 00:11:09.035 "data_offset": 0, 00:11:09.035 "data_size": 65536 00:11:09.035 }, 00:11:09.035 { 00:11:09.035 "name": null, 00:11:09.035 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:09.035 "is_configured": false, 00:11:09.035 "data_offset": 0, 00:11:09.035 "data_size": 65536 00:11:09.035 }, 00:11:09.035 { 00:11:09.035 "name": "BaseBdev4", 00:11:09.035 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:09.035 "is_configured": true, 00:11:09.035 "data_offset": 0, 00:11:09.035 "data_size": 65536 00:11:09.035 } 00:11:09.035 ] 00:11:09.035 }' 00:11:09.035 20:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.035 20:06:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.294 [2024-12-08 20:06:41.235375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.294 20:06:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.294 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.295 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.295 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.295 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.295 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.295 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.555 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.555 "name": "Existed_Raid", 00:11:09.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.555 "strip_size_kb": 0, 00:11:09.555 "state": "configuring", 00:11:09.555 "raid_level": "raid1", 00:11:09.555 "superblock": false, 00:11:09.555 "num_base_bdevs": 4, 00:11:09.555 "num_base_bdevs_discovered": 3, 00:11:09.555 "num_base_bdevs_operational": 4, 00:11:09.555 "base_bdevs_list": [ 00:11:09.555 { 00:11:09.555 "name": "BaseBdev1", 00:11:09.555 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:09.555 "is_configured": true, 00:11:09.555 "data_offset": 0, 00:11:09.555 "data_size": 65536 00:11:09.555 }, 00:11:09.555 { 00:11:09.555 "name": null, 00:11:09.555 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:09.555 "is_configured": false, 00:11:09.555 "data_offset": 
0, 00:11:09.555 "data_size": 65536 00:11:09.555 }, 00:11:09.555 { 00:11:09.555 "name": "BaseBdev3", 00:11:09.555 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:09.555 "is_configured": true, 00:11:09.555 "data_offset": 0, 00:11:09.555 "data_size": 65536 00:11:09.555 }, 00:11:09.555 { 00:11:09.555 "name": "BaseBdev4", 00:11:09.555 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:09.555 "is_configured": true, 00:11:09.555 "data_offset": 0, 00:11:09.555 "data_size": 65536 00:11:09.555 } 00:11:09.555 ] 00:11:09.555 }' 00:11:09.555 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.555 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.815 [2024-12-08 20:06:41.671332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.815 20:06:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.815 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.074 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.074 "name": "Existed_Raid", 00:11:10.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.075 "strip_size_kb": 0, 00:11:10.075 "state": "configuring", 00:11:10.075 
"raid_level": "raid1", 00:11:10.075 "superblock": false, 00:11:10.075 "num_base_bdevs": 4, 00:11:10.075 "num_base_bdevs_discovered": 2, 00:11:10.075 "num_base_bdevs_operational": 4, 00:11:10.075 "base_bdevs_list": [ 00:11:10.075 { 00:11:10.075 "name": null, 00:11:10.075 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:10.075 "is_configured": false, 00:11:10.075 "data_offset": 0, 00:11:10.075 "data_size": 65536 00:11:10.075 }, 00:11:10.075 { 00:11:10.075 "name": null, 00:11:10.075 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:10.075 "is_configured": false, 00:11:10.075 "data_offset": 0, 00:11:10.075 "data_size": 65536 00:11:10.075 }, 00:11:10.075 { 00:11:10.075 "name": "BaseBdev3", 00:11:10.075 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:10.075 "is_configured": true, 00:11:10.075 "data_offset": 0, 00:11:10.075 "data_size": 65536 00:11:10.075 }, 00:11:10.075 { 00:11:10.075 "name": "BaseBdev4", 00:11:10.075 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:10.075 "is_configured": true, 00:11:10.075 "data_offset": 0, 00:11:10.075 "data_size": 65536 00:11:10.075 } 00:11:10.075 ] 00:11:10.075 }' 00:11:10.075 20:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.075 20:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.334 [2024-12-08 20:06:42.245405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.334 "name": "Existed_Raid", 00:11:10.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.334 "strip_size_kb": 0, 00:11:10.334 "state": "configuring", 00:11:10.334 "raid_level": "raid1", 00:11:10.334 "superblock": false, 00:11:10.334 "num_base_bdevs": 4, 00:11:10.334 "num_base_bdevs_discovered": 3, 00:11:10.334 "num_base_bdevs_operational": 4, 00:11:10.334 "base_bdevs_list": [ 00:11:10.334 { 00:11:10.334 "name": null, 00:11:10.334 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:10.334 "is_configured": false, 00:11:10.334 "data_offset": 0, 00:11:10.334 "data_size": 65536 00:11:10.334 }, 00:11:10.334 { 00:11:10.334 "name": "BaseBdev2", 00:11:10.334 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:10.334 "is_configured": true, 00:11:10.334 "data_offset": 0, 00:11:10.334 "data_size": 65536 00:11:10.334 }, 00:11:10.334 { 00:11:10.334 "name": "BaseBdev3", 00:11:10.334 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:10.334 "is_configured": true, 00:11:10.334 "data_offset": 0, 00:11:10.334 "data_size": 65536 00:11:10.334 }, 00:11:10.334 { 00:11:10.334 "name": "BaseBdev4", 00:11:10.334 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:10.334 "is_configured": true, 00:11:10.334 "data_offset": 0, 00:11:10.334 "data_size": 65536 00:11:10.334 } 00:11:10.334 ] 00:11:10.334 }' 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.334 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.903 20:06:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 276c03be-282f-4774-9bf5-19301d9b7d86 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.903 [2024-12-08 20:06:42.764083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.903 [2024-12-08 20:06:42.764201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.903 [2024-12-08 20:06:42.764228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.903 
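The checks at bdev_raid.sh@304/@308/@312/@316 in this transcript pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq '.[0].base_bdevs_list[N].is_configured'` to confirm whether a given base bdev slot is populated before and after `bdev_raid_add_base_bdev` / `bdev_malloc_delete`. The following is a minimal standalone sketch of that check, assuming no running SPDK target: the inline JSON is a hypothetical, trimmed stand-in for the real RPC output, and grep/sed/cut stand in for jq so the sketch has no external dependency.

```shell
# Standalone sketch (not part of this log): mimics the test's is_configured
# check offline. The JSON below is a trimmed, hypothetical stand-in for
# `rpc_cmd bdev_raid_get_bdevs all` output seen earlier in the transcript.
raid_bdev_info='{"base_bdevs_list":[
 {"name":"BaseBdev1","is_configured":true},
 {"name":null,"is_configured":false},
 {"name":"BaseBdev3","is_configured":true}]}'
# Pull the third slot's is_configured flag (index 2, matching the jq filter
# .[0].base_bdevs_list[2].is_configured) using grep/sed/cut instead of jq.
third=$(printf '%s' "$raid_bdev_info" | grep -o '"is_configured":[a-z]*' | sed -n '3p' | cut -d: -f2)
if [ "$third" = "true" ]; then echo configured; else echo unconfigured; fi
```

In the live test the same true/false value drives the `[[ false == \f\a\l\s\e ]]`-style assertions recorded above; here it simply prints `configured`.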
[2024-12-08 20:06:42.764567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:10.903 [2024-12-08 20:06:42.764796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.903 [2024-12-08 20:06:42.764841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:10.903 [2024-12-08 20:06:42.765171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.903 NewBaseBdev 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.903 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.904 [ 00:11:10.904 { 00:11:10.904 "name": "NewBaseBdev", 00:11:10.904 "aliases": [ 00:11:10.904 "276c03be-282f-4774-9bf5-19301d9b7d86" 00:11:10.904 ], 00:11:10.904 "product_name": "Malloc disk", 00:11:10.904 "block_size": 512, 00:11:10.904 "num_blocks": 65536, 00:11:10.904 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:10.904 "assigned_rate_limits": { 00:11:10.904 "rw_ios_per_sec": 0, 00:11:10.904 "rw_mbytes_per_sec": 0, 00:11:10.904 "r_mbytes_per_sec": 0, 00:11:10.904 "w_mbytes_per_sec": 0 00:11:10.904 }, 00:11:10.904 "claimed": true, 00:11:10.904 "claim_type": "exclusive_write", 00:11:10.904 "zoned": false, 00:11:10.904 "supported_io_types": { 00:11:10.904 "read": true, 00:11:10.904 "write": true, 00:11:10.904 "unmap": true, 00:11:10.904 "flush": true, 00:11:10.904 "reset": true, 00:11:10.904 "nvme_admin": false, 00:11:10.904 "nvme_io": false, 00:11:10.904 "nvme_io_md": false, 00:11:10.904 "write_zeroes": true, 00:11:10.904 "zcopy": true, 00:11:10.904 "get_zone_info": false, 00:11:10.904 "zone_management": false, 00:11:10.904 "zone_append": false, 00:11:10.904 "compare": false, 00:11:10.904 "compare_and_write": false, 00:11:10.904 "abort": true, 00:11:10.904 "seek_hole": false, 00:11:10.904 "seek_data": false, 00:11:10.904 "copy": true, 00:11:10.904 "nvme_iov_md": false 00:11:10.904 }, 00:11:10.904 "memory_domains": [ 00:11:10.904 { 00:11:10.904 "dma_device_id": "system", 00:11:10.904 "dma_device_type": 1 00:11:10.904 }, 00:11:10.904 { 00:11:10.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.904 "dma_device_type": 2 00:11:10.904 } 00:11:10.904 ], 00:11:10.904 "driver_specific": {} 00:11:10.904 } 00:11:10.904 ] 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.904 "name": "Existed_Raid", 00:11:10.904 "uuid": "38ae43a9-877e-4858-8ffb-141fd01ebec8", 00:11:10.904 "strip_size_kb": 0, 00:11:10.904 "state": "online", 00:11:10.904 
"raid_level": "raid1", 00:11:10.904 "superblock": false, 00:11:10.904 "num_base_bdevs": 4, 00:11:10.904 "num_base_bdevs_discovered": 4, 00:11:10.904 "num_base_bdevs_operational": 4, 00:11:10.904 "base_bdevs_list": [ 00:11:10.904 { 00:11:10.904 "name": "NewBaseBdev", 00:11:10.904 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:10.904 "is_configured": true, 00:11:10.904 "data_offset": 0, 00:11:10.904 "data_size": 65536 00:11:10.904 }, 00:11:10.904 { 00:11:10.904 "name": "BaseBdev2", 00:11:10.904 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:10.904 "is_configured": true, 00:11:10.904 "data_offset": 0, 00:11:10.904 "data_size": 65536 00:11:10.904 }, 00:11:10.904 { 00:11:10.904 "name": "BaseBdev3", 00:11:10.904 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:10.904 "is_configured": true, 00:11:10.904 "data_offset": 0, 00:11:10.904 "data_size": 65536 00:11:10.904 }, 00:11:10.904 { 00:11:10.904 "name": "BaseBdev4", 00:11:10.904 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:10.904 "is_configured": true, 00:11:10.904 "data_offset": 0, 00:11:10.904 "data_size": 65536 00:11:10.904 } 00:11:10.904 ] 00:11:10.904 }' 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.904 20:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 [2024-12-08 20:06:43.171791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.473 "name": "Existed_Raid", 00:11:11.473 "aliases": [ 00:11:11.473 "38ae43a9-877e-4858-8ffb-141fd01ebec8" 00:11:11.473 ], 00:11:11.473 "product_name": "Raid Volume", 00:11:11.473 "block_size": 512, 00:11:11.473 "num_blocks": 65536, 00:11:11.473 "uuid": "38ae43a9-877e-4858-8ffb-141fd01ebec8", 00:11:11.473 "assigned_rate_limits": { 00:11:11.473 "rw_ios_per_sec": 0, 00:11:11.473 "rw_mbytes_per_sec": 0, 00:11:11.473 "r_mbytes_per_sec": 0, 00:11:11.473 "w_mbytes_per_sec": 0 00:11:11.473 }, 00:11:11.473 "claimed": false, 00:11:11.473 "zoned": false, 00:11:11.473 "supported_io_types": { 00:11:11.473 "read": true, 00:11:11.473 "write": true, 00:11:11.473 "unmap": false, 00:11:11.473 "flush": false, 00:11:11.473 "reset": true, 00:11:11.473 "nvme_admin": false, 00:11:11.473 "nvme_io": false, 00:11:11.473 "nvme_io_md": false, 00:11:11.473 "write_zeroes": true, 00:11:11.473 "zcopy": false, 00:11:11.473 "get_zone_info": false, 00:11:11.473 "zone_management": false, 00:11:11.473 "zone_append": false, 00:11:11.473 "compare": false, 00:11:11.473 "compare_and_write": false, 00:11:11.473 "abort": false, 00:11:11.473 "seek_hole": false, 00:11:11.473 "seek_data": false, 00:11:11.473 
"copy": false, 00:11:11.473 "nvme_iov_md": false 00:11:11.473 }, 00:11:11.473 "memory_domains": [ 00:11:11.473 { 00:11:11.473 "dma_device_id": "system", 00:11:11.473 "dma_device_type": 1 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.473 "dma_device_type": 2 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "system", 00:11:11.473 "dma_device_type": 1 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.473 "dma_device_type": 2 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "system", 00:11:11.473 "dma_device_type": 1 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.473 "dma_device_type": 2 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "system", 00:11:11.473 "dma_device_type": 1 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.473 "dma_device_type": 2 00:11:11.473 } 00:11:11.473 ], 00:11:11.473 "driver_specific": { 00:11:11.473 "raid": { 00:11:11.473 "uuid": "38ae43a9-877e-4858-8ffb-141fd01ebec8", 00:11:11.473 "strip_size_kb": 0, 00:11:11.473 "state": "online", 00:11:11.473 "raid_level": "raid1", 00:11:11.473 "superblock": false, 00:11:11.473 "num_base_bdevs": 4, 00:11:11.473 "num_base_bdevs_discovered": 4, 00:11:11.473 "num_base_bdevs_operational": 4, 00:11:11.473 "base_bdevs_list": [ 00:11:11.473 { 00:11:11.473 "name": "NewBaseBdev", 00:11:11.473 "uuid": "276c03be-282f-4774-9bf5-19301d9b7d86", 00:11:11.473 "is_configured": true, 00:11:11.473 "data_offset": 0, 00:11:11.473 "data_size": 65536 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "name": "BaseBdev2", 00:11:11.473 "uuid": "90657ae9-74d3-4e65-bd9e-20410a7d4a79", 00:11:11.473 "is_configured": true, 00:11:11.473 "data_offset": 0, 00:11:11.473 "data_size": 65536 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "name": "BaseBdev3", 00:11:11.473 "uuid": "51b909c0-75c9-4772-9edd-0399b98fda8e", 00:11:11.473 
"is_configured": true, 00:11:11.473 "data_offset": 0, 00:11:11.473 "data_size": 65536 00:11:11.473 }, 00:11:11.473 { 00:11:11.473 "name": "BaseBdev4", 00:11:11.473 "uuid": "285da12b-f5e0-4a01-bd7f-686cbda5cea8", 00:11:11.473 "is_configured": true, 00:11:11.473 "data_offset": 0, 00:11:11.473 "data_size": 65536 00:11:11.473 } 00:11:11.473 ] 00:11:11.473 } 00:11:11.473 } 00:11:11.473 }' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:11.473 BaseBdev2 00:11:11.473 BaseBdev3 00:11:11.473 BaseBdev4' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.473 20:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.473 20:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.473 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.474 [2024-12-08 20:06:43.407268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.474 [2024-12-08 20:06:43.407332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.474 [2024-12-08 20:06:43.407426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.474 [2024-12-08 20:06:43.407774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.474 [2024-12-08 20:06:43.407835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72976 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72976 ']' 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72976 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72976 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72976' 00:11:11.474 killing process with pid 72976 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72976 00:11:11.474 [2024-12-08 20:06:43.448463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.474 20:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72976 00:11:12.040 [2024-12-08 20:06:43.832836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.980 20:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.980 00:11:12.980 real 0m10.942s 00:11:12.980 user 0m17.321s 00:11:12.980 sys 0m1.906s 00:11:12.980 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.980 ************************************ 00:11:12.980 END TEST raid_state_function_test 00:11:12.980 ************************************ 00:11:12.980 20:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:12.980 20:06:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:12.980 20:06:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.980 20:06:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.980 20:06:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.240 ************************************ 00:11:13.240 START TEST raid_state_function_test_sb 00:11:13.240 ************************************ 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.240 
20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:13.240 Process raid pid: 73642 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73642 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73642' 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73642 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73642 ']' 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.240 20:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.240 [2024-12-08 20:06:45.059093] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:13.240 [2024-12-08 20:06:45.059294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.500 [2024-12-08 20:06:45.230305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.500 [2024-12-08 20:06:45.341897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.759 [2024-12-08 20:06:45.543089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.759 [2024-12-08 20:06:45.543122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.018 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.018 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:14.018 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.018 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.018 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.018 [2024-12-08 20:06:45.890452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.018 [2024-12-08 20:06:45.890552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.018 [2024-12-08 20:06:45.890585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.019 [2024-12-08 20:06:45.890610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.019 [2024-12-08 20:06:45.890629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:14.019 [2024-12-08 20:06:45.890651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.019 [2024-12-08 20:06:45.890670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.019 [2024-12-08 20:06:45.890691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.019 20:06:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.019 "name": "Existed_Raid", 00:11:14.019 "uuid": "939add1e-13b5-4a02-9780-60a6dbe0b632", 00:11:14.019 "strip_size_kb": 0, 00:11:14.019 "state": "configuring", 00:11:14.019 "raid_level": "raid1", 00:11:14.019 "superblock": true, 00:11:14.019 "num_base_bdevs": 4, 00:11:14.019 "num_base_bdevs_discovered": 0, 00:11:14.019 "num_base_bdevs_operational": 4, 00:11:14.019 "base_bdevs_list": [ 00:11:14.019 { 00:11:14.019 "name": "BaseBdev1", 00:11:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.019 "is_configured": false, 00:11:14.019 "data_offset": 0, 00:11:14.019 "data_size": 0 00:11:14.019 }, 00:11:14.019 { 00:11:14.019 "name": "BaseBdev2", 00:11:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.019 "is_configured": false, 00:11:14.019 "data_offset": 0, 00:11:14.019 "data_size": 0 00:11:14.019 }, 00:11:14.019 { 00:11:14.019 "name": "BaseBdev3", 00:11:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.019 "is_configured": false, 00:11:14.019 "data_offset": 0, 00:11:14.019 "data_size": 0 00:11:14.019 }, 00:11:14.019 { 00:11:14.019 "name": "BaseBdev4", 00:11:14.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.019 "is_configured": false, 00:11:14.019 "data_offset": 0, 00:11:14.019 "data_size": 0 00:11:14.019 } 00:11:14.019 ] 00:11:14.019 }' 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.019 20:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.588 [2024-12-08 20:06:46.309659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.588 [2024-12-08 20:06:46.309701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.588 [2024-12-08 20:06:46.321631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.588 [2024-12-08 20:06:46.321674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.588 [2024-12-08 20:06:46.321683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.588 [2024-12-08 20:06:46.321692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.588 [2024-12-08 20:06:46.321699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.588 [2024-12-08 20:06:46.321707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.588 [2024-12-08 20:06:46.321713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:14.588 [2024-12-08 20:06:46.321721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.588 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 [2024-12-08 20:06:46.366440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.589 BaseBdev1 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 [ 00:11:14.589 { 00:11:14.589 "name": "BaseBdev1", 00:11:14.589 "aliases": [ 00:11:14.589 "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e" 00:11:14.589 ], 00:11:14.589 "product_name": "Malloc disk", 00:11:14.589 "block_size": 512, 00:11:14.589 "num_blocks": 65536, 00:11:14.589 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:14.589 "assigned_rate_limits": { 00:11:14.589 "rw_ios_per_sec": 0, 00:11:14.589 "rw_mbytes_per_sec": 0, 00:11:14.589 "r_mbytes_per_sec": 0, 00:11:14.589 "w_mbytes_per_sec": 0 00:11:14.589 }, 00:11:14.589 "claimed": true, 00:11:14.589 "claim_type": "exclusive_write", 00:11:14.589 "zoned": false, 00:11:14.589 "supported_io_types": { 00:11:14.589 "read": true, 00:11:14.589 "write": true, 00:11:14.589 "unmap": true, 00:11:14.589 "flush": true, 00:11:14.589 "reset": true, 00:11:14.589 "nvme_admin": false, 00:11:14.589 "nvme_io": false, 00:11:14.589 "nvme_io_md": false, 00:11:14.589 "write_zeroes": true, 00:11:14.589 "zcopy": true, 00:11:14.589 "get_zone_info": false, 00:11:14.589 "zone_management": false, 00:11:14.589 "zone_append": false, 00:11:14.589 "compare": false, 00:11:14.589 "compare_and_write": false, 00:11:14.589 "abort": true, 00:11:14.589 "seek_hole": false, 00:11:14.589 "seek_data": false, 00:11:14.589 "copy": true, 00:11:14.589 "nvme_iov_md": false 00:11:14.589 }, 00:11:14.589 "memory_domains": [ 00:11:14.589 { 00:11:14.589 "dma_device_id": "system", 00:11:14.589 "dma_device_type": 1 00:11:14.589 }, 00:11:14.589 { 00:11:14.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.589 "dma_device_type": 2 00:11:14.589 } 00:11:14.589 ], 00:11:14.589 "driver_specific": {} 
00:11:14.589 } 00:11:14.589 ] 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.589 "name": "Existed_Raid", 00:11:14.589 "uuid": "046dfb64-d4d0-496e-957a-87faef44c287", 00:11:14.589 "strip_size_kb": 0, 00:11:14.589 "state": "configuring", 00:11:14.589 "raid_level": "raid1", 00:11:14.589 "superblock": true, 00:11:14.589 "num_base_bdevs": 4, 00:11:14.589 "num_base_bdevs_discovered": 1, 00:11:14.589 "num_base_bdevs_operational": 4, 00:11:14.589 "base_bdevs_list": [ 00:11:14.589 { 00:11:14.589 "name": "BaseBdev1", 00:11:14.589 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:14.589 "is_configured": true, 00:11:14.589 "data_offset": 2048, 00:11:14.589 "data_size": 63488 00:11:14.589 }, 00:11:14.589 { 00:11:14.589 "name": "BaseBdev2", 00:11:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.589 "is_configured": false, 00:11:14.589 "data_offset": 0, 00:11:14.589 "data_size": 0 00:11:14.589 }, 00:11:14.589 { 00:11:14.589 "name": "BaseBdev3", 00:11:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.589 "is_configured": false, 00:11:14.589 "data_offset": 0, 00:11:14.589 "data_size": 0 00:11:14.589 }, 00:11:14.589 { 00:11:14.589 "name": "BaseBdev4", 00:11:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.589 "is_configured": false, 00:11:14.589 "data_offset": 0, 00:11:14.589 "data_size": 0 00:11:14.589 } 00:11:14.589 ] 00:11:14.589 }' 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.589 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.849 [2024-12-08 20:06:46.813727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.849 [2024-12-08 20:06:46.813783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.849 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.849 [2024-12-08 20:06:46.821771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.849 [2024-12-08 20:06:46.823635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.849 [2024-12-08 20:06:46.823674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.849 [2024-12-08 20:06:46.823683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.849 [2024-12-08 20:06:46.823694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.849 [2024-12-08 20:06:46.823700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.849 [2024-12-08 20:06:46.823708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.109 20:06:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.109 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.109 "name": 
"Existed_Raid", 00:11:15.109 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:15.109 "strip_size_kb": 0, 00:11:15.109 "state": "configuring", 00:11:15.109 "raid_level": "raid1", 00:11:15.109 "superblock": true, 00:11:15.109 "num_base_bdevs": 4, 00:11:15.109 "num_base_bdevs_discovered": 1, 00:11:15.109 "num_base_bdevs_operational": 4, 00:11:15.109 "base_bdevs_list": [ 00:11:15.109 { 00:11:15.110 "name": "BaseBdev1", 00:11:15.110 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:15.110 "is_configured": true, 00:11:15.110 "data_offset": 2048, 00:11:15.110 "data_size": 63488 00:11:15.110 }, 00:11:15.110 { 00:11:15.110 "name": "BaseBdev2", 00:11:15.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.110 "is_configured": false, 00:11:15.110 "data_offset": 0, 00:11:15.110 "data_size": 0 00:11:15.110 }, 00:11:15.110 { 00:11:15.110 "name": "BaseBdev3", 00:11:15.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.110 "is_configured": false, 00:11:15.110 "data_offset": 0, 00:11:15.110 "data_size": 0 00:11:15.110 }, 00:11:15.110 { 00:11:15.110 "name": "BaseBdev4", 00:11:15.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.110 "is_configured": false, 00:11:15.110 "data_offset": 0, 00:11:15.110 "data_size": 0 00:11:15.110 } 00:11:15.110 ] 00:11:15.110 }' 00:11:15.110 20:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.110 20:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.370 [2024-12-08 20:06:47.326752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.370 
BaseBdev2 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.370 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.636 [ 00:11:15.636 { 00:11:15.636 "name": "BaseBdev2", 00:11:15.636 "aliases": [ 00:11:15.636 "4db35c87-9201-4329-a4de-5b68922807c9" 00:11:15.636 ], 00:11:15.636 "product_name": "Malloc disk", 00:11:15.636 "block_size": 512, 00:11:15.636 "num_blocks": 65536, 00:11:15.636 "uuid": "4db35c87-9201-4329-a4de-5b68922807c9", 00:11:15.636 "assigned_rate_limits": { 
00:11:15.636 "rw_ios_per_sec": 0, 00:11:15.636 "rw_mbytes_per_sec": 0, 00:11:15.636 "r_mbytes_per_sec": 0, 00:11:15.636 "w_mbytes_per_sec": 0 00:11:15.636 }, 00:11:15.636 "claimed": true, 00:11:15.636 "claim_type": "exclusive_write", 00:11:15.636 "zoned": false, 00:11:15.636 "supported_io_types": { 00:11:15.636 "read": true, 00:11:15.636 "write": true, 00:11:15.636 "unmap": true, 00:11:15.636 "flush": true, 00:11:15.636 "reset": true, 00:11:15.636 "nvme_admin": false, 00:11:15.636 "nvme_io": false, 00:11:15.636 "nvme_io_md": false, 00:11:15.636 "write_zeroes": true, 00:11:15.636 "zcopy": true, 00:11:15.636 "get_zone_info": false, 00:11:15.636 "zone_management": false, 00:11:15.636 "zone_append": false, 00:11:15.636 "compare": false, 00:11:15.636 "compare_and_write": false, 00:11:15.636 "abort": true, 00:11:15.636 "seek_hole": false, 00:11:15.636 "seek_data": false, 00:11:15.636 "copy": true, 00:11:15.636 "nvme_iov_md": false 00:11:15.636 }, 00:11:15.636 "memory_domains": [ 00:11:15.636 { 00:11:15.636 "dma_device_id": "system", 00:11:15.636 "dma_device_type": 1 00:11:15.636 }, 00:11:15.636 { 00:11:15.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.636 "dma_device_type": 2 00:11:15.636 } 00:11:15.636 ], 00:11:15.636 "driver_specific": {} 00:11:15.636 } 00:11:15.636 ] 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.636 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.636 "name": "Existed_Raid", 00:11:15.636 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:15.636 "strip_size_kb": 0, 00:11:15.636 "state": "configuring", 00:11:15.636 "raid_level": "raid1", 00:11:15.636 "superblock": true, 00:11:15.637 "num_base_bdevs": 4, 00:11:15.637 "num_base_bdevs_discovered": 2, 00:11:15.637 "num_base_bdevs_operational": 4, 00:11:15.637 
"base_bdevs_list": [ 00:11:15.637 { 00:11:15.637 "name": "BaseBdev1", 00:11:15.637 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:15.637 "is_configured": true, 00:11:15.637 "data_offset": 2048, 00:11:15.637 "data_size": 63488 00:11:15.637 }, 00:11:15.637 { 00:11:15.637 "name": "BaseBdev2", 00:11:15.637 "uuid": "4db35c87-9201-4329-a4de-5b68922807c9", 00:11:15.637 "is_configured": true, 00:11:15.637 "data_offset": 2048, 00:11:15.637 "data_size": 63488 00:11:15.637 }, 00:11:15.637 { 00:11:15.637 "name": "BaseBdev3", 00:11:15.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.637 "is_configured": false, 00:11:15.637 "data_offset": 0, 00:11:15.637 "data_size": 0 00:11:15.637 }, 00:11:15.637 { 00:11:15.637 "name": "BaseBdev4", 00:11:15.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.637 "is_configured": false, 00:11:15.637 "data_offset": 0, 00:11:15.637 "data_size": 0 00:11:15.637 } 00:11:15.637 ] 00:11:15.637 }' 00:11:15.637 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.637 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.902 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.902 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.902 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.161 [2024-12-08 20:06:47.879152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.161 BaseBdev3 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.161 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.162 [ 00:11:16.162 { 00:11:16.162 "name": "BaseBdev3", 00:11:16.162 "aliases": [ 00:11:16.162 "702160dd-771e-4ab9-9d1c-da7955c92e70" 00:11:16.162 ], 00:11:16.162 "product_name": "Malloc disk", 00:11:16.162 "block_size": 512, 00:11:16.162 "num_blocks": 65536, 00:11:16.162 "uuid": "702160dd-771e-4ab9-9d1c-da7955c92e70", 00:11:16.162 "assigned_rate_limits": { 00:11:16.162 "rw_ios_per_sec": 0, 00:11:16.162 "rw_mbytes_per_sec": 0, 00:11:16.162 "r_mbytes_per_sec": 0, 00:11:16.162 "w_mbytes_per_sec": 0 00:11:16.162 }, 00:11:16.162 "claimed": true, 00:11:16.162 "claim_type": "exclusive_write", 00:11:16.162 "zoned": false, 00:11:16.162 "supported_io_types": { 00:11:16.162 "read": true, 00:11:16.162 
"write": true, 00:11:16.162 "unmap": true, 00:11:16.162 "flush": true, 00:11:16.162 "reset": true, 00:11:16.162 "nvme_admin": false, 00:11:16.162 "nvme_io": false, 00:11:16.162 "nvme_io_md": false, 00:11:16.162 "write_zeroes": true, 00:11:16.162 "zcopy": true, 00:11:16.162 "get_zone_info": false, 00:11:16.162 "zone_management": false, 00:11:16.162 "zone_append": false, 00:11:16.162 "compare": false, 00:11:16.162 "compare_and_write": false, 00:11:16.162 "abort": true, 00:11:16.162 "seek_hole": false, 00:11:16.162 "seek_data": false, 00:11:16.162 "copy": true, 00:11:16.162 "nvme_iov_md": false 00:11:16.162 }, 00:11:16.162 "memory_domains": [ 00:11:16.162 { 00:11:16.162 "dma_device_id": "system", 00:11:16.162 "dma_device_type": 1 00:11:16.162 }, 00:11:16.162 { 00:11:16.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.162 "dma_device_type": 2 00:11:16.162 } 00:11:16.162 ], 00:11:16.162 "driver_specific": {} 00:11:16.162 } 00:11:16.162 ] 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.162 "name": "Existed_Raid", 00:11:16.162 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:16.162 "strip_size_kb": 0, 00:11:16.162 "state": "configuring", 00:11:16.162 "raid_level": "raid1", 00:11:16.162 "superblock": true, 00:11:16.162 "num_base_bdevs": 4, 00:11:16.162 "num_base_bdevs_discovered": 3, 00:11:16.162 "num_base_bdevs_operational": 4, 00:11:16.162 "base_bdevs_list": [ 00:11:16.162 { 00:11:16.162 "name": "BaseBdev1", 00:11:16.162 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:16.162 "is_configured": true, 00:11:16.162 "data_offset": 2048, 00:11:16.162 "data_size": 63488 00:11:16.162 }, 00:11:16.162 { 00:11:16.162 "name": "BaseBdev2", 00:11:16.162 "uuid": 
"4db35c87-9201-4329-a4de-5b68922807c9", 00:11:16.162 "is_configured": true, 00:11:16.162 "data_offset": 2048, 00:11:16.162 "data_size": 63488 00:11:16.162 }, 00:11:16.162 { 00:11:16.162 "name": "BaseBdev3", 00:11:16.162 "uuid": "702160dd-771e-4ab9-9d1c-da7955c92e70", 00:11:16.162 "is_configured": true, 00:11:16.162 "data_offset": 2048, 00:11:16.162 "data_size": 63488 00:11:16.162 }, 00:11:16.162 { 00:11:16.162 "name": "BaseBdev4", 00:11:16.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.162 "is_configured": false, 00:11:16.162 "data_offset": 0, 00:11:16.162 "data_size": 0 00:11:16.162 } 00:11:16.162 ] 00:11:16.162 }' 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.162 20:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.423 [2024-12-08 20:06:48.359777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.423 [2024-12-08 20:06:48.360114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.423 [2024-12-08 20:06:48.360133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.423 [2024-12-08 20:06:48.360397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.423 [2024-12-08 20:06:48.360588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.423 [2024-12-08 20:06:48.360604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:16.423 BaseBdev4 00:11:16.423 [2024-12-08 20:06:48.360784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.423 [ 00:11:16.423 { 00:11:16.423 "name": "BaseBdev4", 00:11:16.423 "aliases": [ 00:11:16.423 "cf3a4149-e1a5-44fa-85d7-8117ebced891" 00:11:16.423 ], 00:11:16.423 "product_name": "Malloc disk", 00:11:16.423 "block_size": 512, 00:11:16.423 
"num_blocks": 65536, 00:11:16.423 "uuid": "cf3a4149-e1a5-44fa-85d7-8117ebced891", 00:11:16.423 "assigned_rate_limits": { 00:11:16.423 "rw_ios_per_sec": 0, 00:11:16.423 "rw_mbytes_per_sec": 0, 00:11:16.423 "r_mbytes_per_sec": 0, 00:11:16.423 "w_mbytes_per_sec": 0 00:11:16.423 }, 00:11:16.423 "claimed": true, 00:11:16.423 "claim_type": "exclusive_write", 00:11:16.423 "zoned": false, 00:11:16.423 "supported_io_types": { 00:11:16.423 "read": true, 00:11:16.423 "write": true, 00:11:16.423 "unmap": true, 00:11:16.423 "flush": true, 00:11:16.423 "reset": true, 00:11:16.423 "nvme_admin": false, 00:11:16.423 "nvme_io": false, 00:11:16.423 "nvme_io_md": false, 00:11:16.423 "write_zeroes": true, 00:11:16.423 "zcopy": true, 00:11:16.423 "get_zone_info": false, 00:11:16.423 "zone_management": false, 00:11:16.423 "zone_append": false, 00:11:16.423 "compare": false, 00:11:16.423 "compare_and_write": false, 00:11:16.423 "abort": true, 00:11:16.423 "seek_hole": false, 00:11:16.423 "seek_data": false, 00:11:16.423 "copy": true, 00:11:16.423 "nvme_iov_md": false 00:11:16.423 }, 00:11:16.423 "memory_domains": [ 00:11:16.423 { 00:11:16.423 "dma_device_id": "system", 00:11:16.423 "dma_device_type": 1 00:11:16.423 }, 00:11:16.423 { 00:11:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.423 "dma_device_type": 2 00:11:16.423 } 00:11:16.423 ], 00:11:16.423 "driver_specific": {} 00:11:16.423 } 00:11:16.423 ] 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.423 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.683 "name": "Existed_Raid", 00:11:16.683 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:16.683 "strip_size_kb": 0, 00:11:16.683 "state": "online", 00:11:16.683 "raid_level": "raid1", 00:11:16.683 "superblock": true, 00:11:16.683 "num_base_bdevs": 4, 
00:11:16.683 "num_base_bdevs_discovered": 4, 00:11:16.683 "num_base_bdevs_operational": 4, 00:11:16.683 "base_bdevs_list": [ 00:11:16.683 { 00:11:16.683 "name": "BaseBdev1", 00:11:16.683 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:16.683 "is_configured": true, 00:11:16.683 "data_offset": 2048, 00:11:16.683 "data_size": 63488 00:11:16.683 }, 00:11:16.683 { 00:11:16.683 "name": "BaseBdev2", 00:11:16.683 "uuid": "4db35c87-9201-4329-a4de-5b68922807c9", 00:11:16.683 "is_configured": true, 00:11:16.683 "data_offset": 2048, 00:11:16.683 "data_size": 63488 00:11:16.683 }, 00:11:16.683 { 00:11:16.683 "name": "BaseBdev3", 00:11:16.683 "uuid": "702160dd-771e-4ab9-9d1c-da7955c92e70", 00:11:16.683 "is_configured": true, 00:11:16.683 "data_offset": 2048, 00:11:16.683 "data_size": 63488 00:11:16.683 }, 00:11:16.683 { 00:11:16.683 "name": "BaseBdev4", 00:11:16.683 "uuid": "cf3a4149-e1a5-44fa-85d7-8117ebced891", 00:11:16.683 "is_configured": true, 00:11:16.683 "data_offset": 2048, 00:11:16.683 "data_size": 63488 00:11:16.683 } 00:11:16.683 ] 00:11:16.683 }' 00:11:16.683 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.684 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.946 
20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.946 [2024-12-08 20:06:48.867344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.946 "name": "Existed_Raid", 00:11:16.946 "aliases": [ 00:11:16.946 "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa" 00:11:16.946 ], 00:11:16.946 "product_name": "Raid Volume", 00:11:16.946 "block_size": 512, 00:11:16.946 "num_blocks": 63488, 00:11:16.946 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:16.946 "assigned_rate_limits": { 00:11:16.946 "rw_ios_per_sec": 0, 00:11:16.946 "rw_mbytes_per_sec": 0, 00:11:16.946 "r_mbytes_per_sec": 0, 00:11:16.946 "w_mbytes_per_sec": 0 00:11:16.946 }, 00:11:16.946 "claimed": false, 00:11:16.946 "zoned": false, 00:11:16.946 "supported_io_types": { 00:11:16.946 "read": true, 00:11:16.946 "write": true, 00:11:16.946 "unmap": false, 00:11:16.946 "flush": false, 00:11:16.946 "reset": true, 00:11:16.946 "nvme_admin": false, 00:11:16.946 "nvme_io": false, 00:11:16.946 "nvme_io_md": false, 00:11:16.946 "write_zeroes": true, 00:11:16.946 "zcopy": false, 00:11:16.946 "get_zone_info": false, 00:11:16.946 "zone_management": false, 00:11:16.946 "zone_append": false, 00:11:16.946 "compare": false, 00:11:16.946 "compare_and_write": false, 00:11:16.946 "abort": false, 00:11:16.946 "seek_hole": false, 00:11:16.946 "seek_data": false, 00:11:16.946 "copy": false, 00:11:16.946 
"nvme_iov_md": false 00:11:16.946 }, 00:11:16.946 "memory_domains": [ 00:11:16.946 { 00:11:16.946 "dma_device_id": "system", 00:11:16.946 "dma_device_type": 1 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.946 "dma_device_type": 2 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "system", 00:11:16.946 "dma_device_type": 1 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.946 "dma_device_type": 2 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "system", 00:11:16.946 "dma_device_type": 1 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.946 "dma_device_type": 2 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "system", 00:11:16.946 "dma_device_type": 1 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.946 "dma_device_type": 2 00:11:16.946 } 00:11:16.946 ], 00:11:16.946 "driver_specific": { 00:11:16.946 "raid": { 00:11:16.946 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:16.946 "strip_size_kb": 0, 00:11:16.946 "state": "online", 00:11:16.946 "raid_level": "raid1", 00:11:16.946 "superblock": true, 00:11:16.946 "num_base_bdevs": 4, 00:11:16.946 "num_base_bdevs_discovered": 4, 00:11:16.946 "num_base_bdevs_operational": 4, 00:11:16.946 "base_bdevs_list": [ 00:11:16.946 { 00:11:16.946 "name": "BaseBdev1", 00:11:16.946 "uuid": "9e32e6b5-4a02-4cdb-83a1-7502fae55a4e", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev2", 00:11:16.946 "uuid": "4db35c87-9201-4329-a4de-5b68922807c9", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev3", 00:11:16.946 "uuid": "702160dd-771e-4ab9-9d1c-da7955c92e70", 00:11:16.946 "is_configured": true, 
00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev4", 00:11:16.946 "uuid": "cf3a4149-e1a5-44fa-85d7-8117ebced891", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 } 00:11:16.946 ] 00:11:16.946 } 00:11:16.946 } 00:11:16.946 }' 00:11:16.946 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.206 BaseBdev2 00:11:17.206 BaseBdev3 00:11:17.206 BaseBdev4' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.206 20:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.206 20:06:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.206 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.206 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.206 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.206 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.207 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.207 [2024-12-08 20:06:49.114555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:17.467 20:06:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.467 "name": "Existed_Raid", 00:11:17.467 "uuid": "f4dc7c88-38d0-43e2-aa9b-6fdce906e9fa", 00:11:17.467 "strip_size_kb": 0, 00:11:17.467 
"state": "online", 00:11:17.467 "raid_level": "raid1", 00:11:17.467 "superblock": true, 00:11:17.467 "num_base_bdevs": 4, 00:11:17.467 "num_base_bdevs_discovered": 3, 00:11:17.467 "num_base_bdevs_operational": 3, 00:11:17.467 "base_bdevs_list": [ 00:11:17.467 { 00:11:17.467 "name": null, 00:11:17.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.467 "is_configured": false, 00:11:17.467 "data_offset": 0, 00:11:17.467 "data_size": 63488 00:11:17.467 }, 00:11:17.467 { 00:11:17.467 "name": "BaseBdev2", 00:11:17.467 "uuid": "4db35c87-9201-4329-a4de-5b68922807c9", 00:11:17.467 "is_configured": true, 00:11:17.467 "data_offset": 2048, 00:11:17.467 "data_size": 63488 00:11:17.467 }, 00:11:17.467 { 00:11:17.467 "name": "BaseBdev3", 00:11:17.467 "uuid": "702160dd-771e-4ab9-9d1c-da7955c92e70", 00:11:17.467 "is_configured": true, 00:11:17.467 "data_offset": 2048, 00:11:17.467 "data_size": 63488 00:11:17.467 }, 00:11:17.467 { 00:11:17.467 "name": "BaseBdev4", 00:11:17.467 "uuid": "cf3a4149-e1a5-44fa-85d7-8117ebced891", 00:11:17.467 "is_configured": true, 00:11:17.467 "data_offset": 2048, 00:11:17.467 "data_size": 63488 00:11:17.467 } 00:11:17.467 ] 00:11:17.467 }' 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.467 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.727 20:06:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.727 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.727 [2024-12-08 20:06:49.670754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.987 [2024-12-08 20:06:49.822750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.987 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.247 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.247 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.247 20:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.247 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.247 20:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.247 [2024-12-08 20:06:49.974033] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.247 [2024-12-08 20:06:49.974136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.247 [2024-12-08 20:06:50.065986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.247 [2024-12-08 20:06:50.066130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.247 [2024-12-08 20:06:50.066148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.247 BaseBdev2 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.247 20:06:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:18.247 [ 00:11:18.247 { 00:11:18.247 "name": "BaseBdev2", 00:11:18.247 "aliases": [ 00:11:18.247 "532f1c65-86d4-4601-abf9-b25ff41b4f62" 00:11:18.247 ], 00:11:18.247 "product_name": "Malloc disk", 00:11:18.247 "block_size": 512, 00:11:18.247 "num_blocks": 65536, 00:11:18.247 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:18.247 "assigned_rate_limits": { 00:11:18.248 "rw_ios_per_sec": 0, 00:11:18.248 "rw_mbytes_per_sec": 0, 00:11:18.248 "r_mbytes_per_sec": 0, 00:11:18.248 "w_mbytes_per_sec": 0 00:11:18.248 }, 00:11:18.248 "claimed": false, 00:11:18.248 "zoned": false, 00:11:18.248 "supported_io_types": { 00:11:18.248 "read": true, 00:11:18.248 "write": true, 00:11:18.248 "unmap": true, 00:11:18.248 "flush": true, 00:11:18.248 "reset": true, 00:11:18.248 "nvme_admin": false, 00:11:18.248 "nvme_io": false, 00:11:18.248 "nvme_io_md": false, 00:11:18.248 "write_zeroes": true, 00:11:18.248 "zcopy": true, 00:11:18.248 "get_zone_info": false, 00:11:18.248 "zone_management": false, 00:11:18.248 "zone_append": false, 00:11:18.248 "compare": false, 00:11:18.248 "compare_and_write": false, 00:11:18.248 "abort": true, 00:11:18.248 "seek_hole": false, 00:11:18.248 "seek_data": false, 00:11:18.248 "copy": true, 00:11:18.248 "nvme_iov_md": false 00:11:18.248 }, 00:11:18.248 "memory_domains": [ 00:11:18.248 { 00:11:18.248 "dma_device_id": "system", 00:11:18.248 "dma_device_type": 1 00:11:18.248 }, 00:11:18.248 { 00:11:18.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.248 "dma_device_type": 2 00:11:18.248 } 00:11:18.248 ], 00:11:18.248 "driver_specific": {} 00:11:18.248 } 00:11:18.248 ] 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.248 20:06:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.248 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 BaseBdev3 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.508 20:06:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 [ 00:11:18.508 { 00:11:18.508 "name": "BaseBdev3", 00:11:18.508 "aliases": [ 00:11:18.508 "ac0bd531-a3c9-4f32-9f87-64c03d622ecb" 00:11:18.508 ], 00:11:18.508 "product_name": "Malloc disk", 00:11:18.508 "block_size": 512, 00:11:18.508 "num_blocks": 65536, 00:11:18.508 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:18.508 "assigned_rate_limits": { 00:11:18.508 "rw_ios_per_sec": 0, 00:11:18.508 "rw_mbytes_per_sec": 0, 00:11:18.508 "r_mbytes_per_sec": 0, 00:11:18.508 "w_mbytes_per_sec": 0 00:11:18.508 }, 00:11:18.508 "claimed": false, 00:11:18.508 "zoned": false, 00:11:18.508 "supported_io_types": { 00:11:18.508 "read": true, 00:11:18.508 "write": true, 00:11:18.508 "unmap": true, 00:11:18.508 "flush": true, 00:11:18.508 "reset": true, 00:11:18.508 "nvme_admin": false, 00:11:18.508 "nvme_io": false, 00:11:18.508 "nvme_io_md": false, 00:11:18.508 "write_zeroes": true, 00:11:18.508 "zcopy": true, 00:11:18.508 "get_zone_info": false, 00:11:18.508 "zone_management": false, 00:11:18.508 "zone_append": false, 00:11:18.508 "compare": false, 00:11:18.508 "compare_and_write": false, 00:11:18.508 "abort": true, 00:11:18.508 "seek_hole": false, 00:11:18.508 "seek_data": false, 00:11:18.508 "copy": true, 00:11:18.508 "nvme_iov_md": false 00:11:18.508 }, 00:11:18.508 "memory_domains": [ 00:11:18.508 { 00:11:18.508 "dma_device_id": "system", 00:11:18.508 "dma_device_type": 1 00:11:18.508 }, 00:11:18.508 { 00:11:18.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.508 "dma_device_type": 2 00:11:18.508 } 00:11:18.508 ], 00:11:18.508 "driver_specific": {} 00:11:18.508 } 00:11:18.508 ] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 BaseBdev4 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.508 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.508 [ 00:11:18.508 { 00:11:18.508 "name": "BaseBdev4", 00:11:18.508 "aliases": [ 00:11:18.508 "d5921e98-c9b1-4e5b-a49c-286e8f55503b" 00:11:18.508 ], 00:11:18.508 "product_name": "Malloc disk", 00:11:18.508 "block_size": 512, 00:11:18.508 "num_blocks": 65536, 00:11:18.508 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:18.508 "assigned_rate_limits": { 00:11:18.508 "rw_ios_per_sec": 0, 00:11:18.508 "rw_mbytes_per_sec": 0, 00:11:18.508 "r_mbytes_per_sec": 0, 00:11:18.508 "w_mbytes_per_sec": 0 00:11:18.508 }, 00:11:18.508 "claimed": false, 00:11:18.508 "zoned": false, 00:11:18.508 "supported_io_types": { 00:11:18.508 "read": true, 00:11:18.508 "write": true, 00:11:18.508 "unmap": true, 00:11:18.509 "flush": true, 00:11:18.509 "reset": true, 00:11:18.509 "nvme_admin": false, 00:11:18.509 "nvme_io": false, 00:11:18.509 "nvme_io_md": false, 00:11:18.509 "write_zeroes": true, 00:11:18.509 "zcopy": true, 00:11:18.509 "get_zone_info": false, 00:11:18.509 "zone_management": false, 00:11:18.509 "zone_append": false, 00:11:18.509 "compare": false, 00:11:18.509 "compare_and_write": false, 00:11:18.509 "abort": true, 00:11:18.509 "seek_hole": false, 00:11:18.509 "seek_data": false, 00:11:18.509 "copy": true, 00:11:18.509 "nvme_iov_md": false 00:11:18.509 }, 00:11:18.509 "memory_domains": [ 00:11:18.509 { 00:11:18.509 "dma_device_id": "system", 00:11:18.509 "dma_device_type": 1 00:11:18.509 }, 00:11:18.509 { 00:11:18.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.509 "dma_device_type": 2 00:11:18.509 } 00:11:18.509 ], 00:11:18.509 "driver_specific": {} 00:11:18.509 } 00:11:18.509 ] 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.509 [2024-12-08 20:06:50.348326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.509 [2024-12-08 20:06:50.348440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.509 [2024-12-08 20:06:50.348483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.509 [2024-12-08 20:06:50.350327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.509 [2024-12-08 20:06:50.350419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.509 "name": "Existed_Raid", 00:11:18.509 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:18.509 "strip_size_kb": 0, 00:11:18.509 "state": "configuring", 00:11:18.509 "raid_level": "raid1", 00:11:18.509 "superblock": true, 00:11:18.509 "num_base_bdevs": 4, 00:11:18.509 "num_base_bdevs_discovered": 3, 00:11:18.509 "num_base_bdevs_operational": 4, 00:11:18.509 "base_bdevs_list": [ 00:11:18.509 { 00:11:18.509 "name": "BaseBdev1", 00:11:18.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.509 "is_configured": false, 00:11:18.509 "data_offset": 0, 00:11:18.509 "data_size": 0 00:11:18.509 }, 00:11:18.509 { 00:11:18.509 "name": "BaseBdev2", 00:11:18.509 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 
00:11:18.509 "is_configured": true, 00:11:18.509 "data_offset": 2048, 00:11:18.509 "data_size": 63488 00:11:18.509 }, 00:11:18.509 { 00:11:18.509 "name": "BaseBdev3", 00:11:18.509 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:18.509 "is_configured": true, 00:11:18.509 "data_offset": 2048, 00:11:18.509 "data_size": 63488 00:11:18.509 }, 00:11:18.509 { 00:11:18.509 "name": "BaseBdev4", 00:11:18.509 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:18.509 "is_configured": true, 00:11:18.509 "data_offset": 2048, 00:11:18.509 "data_size": 63488 00:11:18.509 } 00:11:18.509 ] 00:11:18.509 }' 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.509 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.078 [2024-12-08 20:06:50.775609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.078 "name": "Existed_Raid", 00:11:19.078 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:19.078 "strip_size_kb": 0, 00:11:19.078 "state": "configuring", 00:11:19.078 "raid_level": "raid1", 00:11:19.078 "superblock": true, 00:11:19.078 "num_base_bdevs": 4, 00:11:19.078 "num_base_bdevs_discovered": 2, 00:11:19.078 "num_base_bdevs_operational": 4, 00:11:19.078 "base_bdevs_list": [ 00:11:19.078 { 00:11:19.078 "name": "BaseBdev1", 00:11:19.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.078 "is_configured": false, 00:11:19.078 "data_offset": 0, 00:11:19.078 "data_size": 0 00:11:19.078 }, 00:11:19.078 { 00:11:19.078 "name": null, 00:11:19.078 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:19.078 
"is_configured": false, 00:11:19.078 "data_offset": 0, 00:11:19.078 "data_size": 63488 00:11:19.078 }, 00:11:19.078 { 00:11:19.078 "name": "BaseBdev3", 00:11:19.078 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:19.078 "is_configured": true, 00:11:19.078 "data_offset": 2048, 00:11:19.078 "data_size": 63488 00:11:19.078 }, 00:11:19.078 { 00:11:19.078 "name": "BaseBdev4", 00:11:19.078 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:19.078 "is_configured": true, 00:11:19.078 "data_offset": 2048, 00:11:19.078 "data_size": 63488 00:11:19.078 } 00:11:19.078 ] 00:11:19.078 }' 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.078 20:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 [2024-12-08 20:06:51.254545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.338 BaseBdev1 
00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.338 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.338 [ 00:11:19.338 { 00:11:19.338 "name": "BaseBdev1", 00:11:19.338 "aliases": [ 00:11:19.338 "368adc48-54fc-49eb-a7f4-d56b1c79b083" 00:11:19.338 ], 00:11:19.338 "product_name": "Malloc disk", 00:11:19.338 "block_size": 512, 00:11:19.338 "num_blocks": 65536, 00:11:19.338 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:19.338 "assigned_rate_limits": { 00:11:19.338 
"rw_ios_per_sec": 0, 00:11:19.338 "rw_mbytes_per_sec": 0, 00:11:19.338 "r_mbytes_per_sec": 0, 00:11:19.338 "w_mbytes_per_sec": 0 00:11:19.338 }, 00:11:19.338 "claimed": true, 00:11:19.338 "claim_type": "exclusive_write", 00:11:19.338 "zoned": false, 00:11:19.338 "supported_io_types": { 00:11:19.338 "read": true, 00:11:19.338 "write": true, 00:11:19.338 "unmap": true, 00:11:19.338 "flush": true, 00:11:19.338 "reset": true, 00:11:19.338 "nvme_admin": false, 00:11:19.338 "nvme_io": false, 00:11:19.338 "nvme_io_md": false, 00:11:19.338 "write_zeroes": true, 00:11:19.338 "zcopy": true, 00:11:19.338 "get_zone_info": false, 00:11:19.338 "zone_management": false, 00:11:19.338 "zone_append": false, 00:11:19.338 "compare": false, 00:11:19.338 "compare_and_write": false, 00:11:19.338 "abort": true, 00:11:19.338 "seek_hole": false, 00:11:19.339 "seek_data": false, 00:11:19.339 "copy": true, 00:11:19.339 "nvme_iov_md": false 00:11:19.339 }, 00:11:19.339 "memory_domains": [ 00:11:19.339 { 00:11:19.339 "dma_device_id": "system", 00:11:19.339 "dma_device_type": 1 00:11:19.339 }, 00:11:19.339 { 00:11:19.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.339 "dma_device_type": 2 00:11:19.339 } 00:11:19.339 ], 00:11:19.339 "driver_specific": {} 00:11:19.339 } 00:11:19.339 ] 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.339 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.598 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.598 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.598 "name": "Existed_Raid", 00:11:19.598 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:19.598 "strip_size_kb": 0, 00:11:19.598 "state": "configuring", 00:11:19.598 "raid_level": "raid1", 00:11:19.598 "superblock": true, 00:11:19.598 "num_base_bdevs": 4, 00:11:19.598 "num_base_bdevs_discovered": 3, 00:11:19.598 "num_base_bdevs_operational": 4, 00:11:19.598 "base_bdevs_list": [ 00:11:19.598 { 00:11:19.598 "name": "BaseBdev1", 00:11:19.598 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:19.598 "is_configured": true, 00:11:19.598 "data_offset": 2048, 00:11:19.598 "data_size": 63488 
00:11:19.598 }, 00:11:19.598 { 00:11:19.598 "name": null, 00:11:19.598 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:19.598 "is_configured": false, 00:11:19.598 "data_offset": 0, 00:11:19.598 "data_size": 63488 00:11:19.598 }, 00:11:19.598 { 00:11:19.598 "name": "BaseBdev3", 00:11:19.598 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:19.598 "is_configured": true, 00:11:19.598 "data_offset": 2048, 00:11:19.598 "data_size": 63488 00:11:19.598 }, 00:11:19.598 { 00:11:19.598 "name": "BaseBdev4", 00:11:19.598 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:19.598 "is_configured": true, 00:11:19.598 "data_offset": 2048, 00:11:19.598 "data_size": 63488 00:11:19.598 } 00:11:19.598 ] 00:11:19.598 }' 00:11:19.598 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.598 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 
[2024-12-08 20:06:51.753792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.858 20:06:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.858 "name": "Existed_Raid", 00:11:19.858 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:19.858 "strip_size_kb": 0, 00:11:19.858 "state": "configuring", 00:11:19.858 "raid_level": "raid1", 00:11:19.858 "superblock": true, 00:11:19.858 "num_base_bdevs": 4, 00:11:19.858 "num_base_bdevs_discovered": 2, 00:11:19.858 "num_base_bdevs_operational": 4, 00:11:19.858 "base_bdevs_list": [ 00:11:19.858 { 00:11:19.858 "name": "BaseBdev1", 00:11:19.858 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:19.858 "is_configured": true, 00:11:19.858 "data_offset": 2048, 00:11:19.858 "data_size": 63488 00:11:19.858 }, 00:11:19.858 { 00:11:19.858 "name": null, 00:11:19.858 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:19.858 "is_configured": false, 00:11:19.858 "data_offset": 0, 00:11:19.858 "data_size": 63488 00:11:19.858 }, 00:11:19.858 { 00:11:19.858 "name": null, 00:11:19.858 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:19.858 "is_configured": false, 00:11:19.858 "data_offset": 0, 00:11:19.858 "data_size": 63488 00:11:19.858 }, 00:11:19.858 { 00:11:19.858 "name": "BaseBdev4", 00:11:19.859 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:19.859 "is_configured": true, 00:11:19.859 "data_offset": 2048, 00:11:19.859 "data_size": 63488 00:11:19.859 } 00:11:19.859 ] 00:11:19.859 }' 00:11:19.859 20:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.859 20:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.427 
20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.427 [2024-12-08 20:06:52.205008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.427 "name": "Existed_Raid", 00:11:20.427 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:20.427 "strip_size_kb": 0, 00:11:20.427 "state": "configuring", 00:11:20.427 "raid_level": "raid1", 00:11:20.427 "superblock": true, 00:11:20.427 "num_base_bdevs": 4, 00:11:20.427 "num_base_bdevs_discovered": 3, 00:11:20.427 "num_base_bdevs_operational": 4, 00:11:20.427 "base_bdevs_list": [ 00:11:20.427 { 00:11:20.427 "name": "BaseBdev1", 00:11:20.427 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:20.427 "is_configured": true, 00:11:20.427 "data_offset": 2048, 00:11:20.427 "data_size": 63488 00:11:20.427 }, 00:11:20.427 { 00:11:20.427 "name": null, 00:11:20.427 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:20.427 "is_configured": false, 00:11:20.427 "data_offset": 0, 00:11:20.427 "data_size": 63488 00:11:20.427 }, 00:11:20.427 { 00:11:20.427 "name": "BaseBdev3", 00:11:20.427 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:20.427 "is_configured": true, 00:11:20.427 "data_offset": 2048, 00:11:20.427 "data_size": 63488 00:11:20.427 }, 00:11:20.427 { 00:11:20.427 "name": "BaseBdev4", 00:11:20.427 "uuid": 
"d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:20.427 "is_configured": true, 00:11:20.427 "data_offset": 2048, 00:11:20.427 "data_size": 63488 00:11:20.427 } 00:11:20.427 ] 00:11:20.427 }' 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.427 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.687 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.687 [2024-12-08 20:06:52.632309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.946 "name": "Existed_Raid", 00:11:20.946 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:20.946 "strip_size_kb": 0, 00:11:20.946 "state": "configuring", 00:11:20.946 "raid_level": "raid1", 00:11:20.946 "superblock": true, 00:11:20.946 "num_base_bdevs": 4, 00:11:20.946 "num_base_bdevs_discovered": 2, 00:11:20.946 "num_base_bdevs_operational": 4, 00:11:20.946 "base_bdevs_list": [ 00:11:20.946 { 00:11:20.946 "name": null, 00:11:20.946 
"uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:20.946 "is_configured": false, 00:11:20.946 "data_offset": 0, 00:11:20.946 "data_size": 63488 00:11:20.946 }, 00:11:20.946 { 00:11:20.946 "name": null, 00:11:20.946 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:20.946 "is_configured": false, 00:11:20.946 "data_offset": 0, 00:11:20.946 "data_size": 63488 00:11:20.946 }, 00:11:20.946 { 00:11:20.946 "name": "BaseBdev3", 00:11:20.946 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:20.946 "is_configured": true, 00:11:20.946 "data_offset": 2048, 00:11:20.946 "data_size": 63488 00:11:20.946 }, 00:11:20.946 { 00:11:20.946 "name": "BaseBdev4", 00:11:20.946 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:20.946 "is_configured": true, 00:11:20.946 "data_offset": 2048, 00:11:20.946 "data_size": 63488 00:11:20.946 } 00:11:20.946 ] 00:11:20.946 }' 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.946 20:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.206 [2024-12-08 20:06:53.174991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.206 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.467 20:06:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.467 "name": "Existed_Raid", 00:11:21.467 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:21.467 "strip_size_kb": 0, 00:11:21.467 "state": "configuring", 00:11:21.467 "raid_level": "raid1", 00:11:21.467 "superblock": true, 00:11:21.467 "num_base_bdevs": 4, 00:11:21.467 "num_base_bdevs_discovered": 3, 00:11:21.467 "num_base_bdevs_operational": 4, 00:11:21.467 "base_bdevs_list": [ 00:11:21.467 { 00:11:21.467 "name": null, 00:11:21.467 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:21.467 "is_configured": false, 00:11:21.467 "data_offset": 0, 00:11:21.467 "data_size": 63488 00:11:21.467 }, 00:11:21.467 { 00:11:21.467 "name": "BaseBdev2", 00:11:21.467 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:21.467 "is_configured": true, 00:11:21.467 "data_offset": 2048, 00:11:21.467 "data_size": 63488 00:11:21.467 }, 00:11:21.467 { 00:11:21.467 "name": "BaseBdev3", 00:11:21.467 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:21.467 "is_configured": true, 00:11:21.467 "data_offset": 2048, 00:11:21.467 "data_size": 63488 00:11:21.467 }, 00:11:21.467 { 00:11:21.467 "name": "BaseBdev4", 00:11:21.467 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:21.467 "is_configured": true, 00:11:21.467 "data_offset": 2048, 00:11:21.467 "data_size": 63488 00:11:21.467 } 00:11:21.467 ] 00:11:21.467 }' 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.467 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.726 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.727 20:06:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.727 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 368adc48-54fc-49eb-a7f4-d56b1c79b083 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.986 [2024-12-08 20:06:53.752614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:21.986 [2024-12-08 20:06:53.752976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:21.986 [2024-12-08 20:06:53.753031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.986 [2024-12-08 20:06:53.753335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:21.986 NewBaseBdev 00:11:21.986 [2024-12-08 20:06:53.753559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:21.986 [2024-12-08 20:06:53.753607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.986 [2024-12-08 20:06:53.753811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:21.986 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.987 20:06:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.987 [ 00:11:21.987 { 00:11:21.987 "name": "NewBaseBdev", 00:11:21.987 "aliases": [ 00:11:21.987 "368adc48-54fc-49eb-a7f4-d56b1c79b083" 00:11:21.987 ], 00:11:21.987 "product_name": "Malloc disk", 00:11:21.987 "block_size": 512, 00:11:21.987 "num_blocks": 65536, 00:11:21.987 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:21.987 "assigned_rate_limits": { 00:11:21.987 "rw_ios_per_sec": 0, 00:11:21.987 "rw_mbytes_per_sec": 0, 00:11:21.987 "r_mbytes_per_sec": 0, 00:11:21.987 "w_mbytes_per_sec": 0 00:11:21.987 }, 00:11:21.987 "claimed": true, 00:11:21.987 "claim_type": "exclusive_write", 00:11:21.987 "zoned": false, 00:11:21.987 "supported_io_types": { 00:11:21.987 "read": true, 00:11:21.987 "write": true, 00:11:21.987 "unmap": true, 00:11:21.987 "flush": true, 00:11:21.987 "reset": true, 00:11:21.987 "nvme_admin": false, 00:11:21.987 "nvme_io": false, 00:11:21.987 "nvme_io_md": false, 00:11:21.987 "write_zeroes": true, 00:11:21.987 "zcopy": true, 00:11:21.987 "get_zone_info": false, 00:11:21.987 "zone_management": false, 00:11:21.987 "zone_append": false, 00:11:21.987 "compare": false, 00:11:21.987 "compare_and_write": false, 00:11:21.987 "abort": true, 00:11:21.987 "seek_hole": false, 00:11:21.987 "seek_data": false, 00:11:21.987 "copy": true, 00:11:21.987 "nvme_iov_md": false 00:11:21.987 }, 00:11:21.987 "memory_domains": [ 00:11:21.987 { 00:11:21.987 "dma_device_id": "system", 00:11:21.987 "dma_device_type": 1 00:11:21.987 }, 00:11:21.987 { 00:11:21.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.987 "dma_device_type": 2 00:11:21.987 } 00:11:21.987 ], 00:11:21.987 "driver_specific": {} 00:11:21.987 } 00:11:21.987 ] 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.987 20:06:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.987 "name": "Existed_Raid", 00:11:21.987 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:21.987 "strip_size_kb": 0, 00:11:21.987 
"state": "online", 00:11:21.987 "raid_level": "raid1", 00:11:21.987 "superblock": true, 00:11:21.987 "num_base_bdevs": 4, 00:11:21.987 "num_base_bdevs_discovered": 4, 00:11:21.987 "num_base_bdevs_operational": 4, 00:11:21.987 "base_bdevs_list": [ 00:11:21.987 { 00:11:21.987 "name": "NewBaseBdev", 00:11:21.987 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:21.987 "is_configured": true, 00:11:21.987 "data_offset": 2048, 00:11:21.987 "data_size": 63488 00:11:21.987 }, 00:11:21.987 { 00:11:21.987 "name": "BaseBdev2", 00:11:21.987 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:21.987 "is_configured": true, 00:11:21.987 "data_offset": 2048, 00:11:21.987 "data_size": 63488 00:11:21.987 }, 00:11:21.987 { 00:11:21.987 "name": "BaseBdev3", 00:11:21.987 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:21.987 "is_configured": true, 00:11:21.987 "data_offset": 2048, 00:11:21.987 "data_size": 63488 00:11:21.987 }, 00:11:21.987 { 00:11:21.987 "name": "BaseBdev4", 00:11:21.987 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:21.987 "is_configured": true, 00:11:21.987 "data_offset": 2048, 00:11:21.987 "data_size": 63488 00:11:21.987 } 00:11:21.987 ] 00:11:21.987 }' 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.987 20:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.247 
20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.247 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.507 [2024-12-08 20:06:54.224183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.507 "name": "Existed_Raid", 00:11:22.507 "aliases": [ 00:11:22.507 "09b944af-0556-49f5-a41d-e8e87477053d" 00:11:22.507 ], 00:11:22.507 "product_name": "Raid Volume", 00:11:22.507 "block_size": 512, 00:11:22.507 "num_blocks": 63488, 00:11:22.507 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:22.507 "assigned_rate_limits": { 00:11:22.507 "rw_ios_per_sec": 0, 00:11:22.507 "rw_mbytes_per_sec": 0, 00:11:22.507 "r_mbytes_per_sec": 0, 00:11:22.507 "w_mbytes_per_sec": 0 00:11:22.507 }, 00:11:22.507 "claimed": false, 00:11:22.507 "zoned": false, 00:11:22.507 "supported_io_types": { 00:11:22.507 "read": true, 00:11:22.507 "write": true, 00:11:22.507 "unmap": false, 00:11:22.507 "flush": false, 00:11:22.507 "reset": true, 00:11:22.507 "nvme_admin": false, 00:11:22.507 "nvme_io": false, 00:11:22.507 "nvme_io_md": false, 00:11:22.507 "write_zeroes": true, 00:11:22.507 "zcopy": false, 00:11:22.507 "get_zone_info": false, 00:11:22.507 "zone_management": false, 00:11:22.507 "zone_append": false, 00:11:22.507 "compare": false, 00:11:22.507 "compare_and_write": false, 00:11:22.507 
"abort": false, 00:11:22.507 "seek_hole": false, 00:11:22.507 "seek_data": false, 00:11:22.507 "copy": false, 00:11:22.507 "nvme_iov_md": false 00:11:22.507 }, 00:11:22.507 "memory_domains": [ 00:11:22.507 { 00:11:22.507 "dma_device_id": "system", 00:11:22.507 "dma_device_type": 1 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.507 "dma_device_type": 2 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "system", 00:11:22.507 "dma_device_type": 1 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.507 "dma_device_type": 2 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "system", 00:11:22.507 "dma_device_type": 1 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.507 "dma_device_type": 2 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "system", 00:11:22.507 "dma_device_type": 1 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.507 "dma_device_type": 2 00:11:22.507 } 00:11:22.507 ], 00:11:22.507 "driver_specific": { 00:11:22.507 "raid": { 00:11:22.507 "uuid": "09b944af-0556-49f5-a41d-e8e87477053d", 00:11:22.507 "strip_size_kb": 0, 00:11:22.507 "state": "online", 00:11:22.507 "raid_level": "raid1", 00:11:22.507 "superblock": true, 00:11:22.507 "num_base_bdevs": 4, 00:11:22.507 "num_base_bdevs_discovered": 4, 00:11:22.507 "num_base_bdevs_operational": 4, 00:11:22.507 "base_bdevs_list": [ 00:11:22.507 { 00:11:22.507 "name": "NewBaseBdev", 00:11:22.507 "uuid": "368adc48-54fc-49eb-a7f4-d56b1c79b083", 00:11:22.507 "is_configured": true, 00:11:22.507 "data_offset": 2048, 00:11:22.507 "data_size": 63488 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "name": "BaseBdev2", 00:11:22.507 "uuid": "532f1c65-86d4-4601-abf9-b25ff41b4f62", 00:11:22.507 "is_configured": true, 00:11:22.507 "data_offset": 2048, 00:11:22.507 "data_size": 63488 00:11:22.507 }, 00:11:22.507 { 
00:11:22.507 "name": "BaseBdev3", 00:11:22.507 "uuid": "ac0bd531-a3c9-4f32-9f87-64c03d622ecb", 00:11:22.507 "is_configured": true, 00:11:22.507 "data_offset": 2048, 00:11:22.507 "data_size": 63488 00:11:22.507 }, 00:11:22.507 { 00:11:22.507 "name": "BaseBdev4", 00:11:22.507 "uuid": "d5921e98-c9b1-4e5b-a49c-286e8f55503b", 00:11:22.507 "is_configured": true, 00:11:22.507 "data_offset": 2048, 00:11:22.507 "data_size": 63488 00:11:22.507 } 00:11:22.507 ] 00:11:22.507 } 00:11:22.507 } 00:11:22.507 }' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:22.507 BaseBdev2 00:11:22.507 BaseBdev3 00:11:22.507 BaseBdev4' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.507 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:22.508 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:22.768 [2024-12-08 20:06:54.555280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:22.768 [2024-12-08 20:06:54.555348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:22.768 [2024-12-08 20:06:54.555429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:22.768 [2024-12-08 20:06:54.555739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:22.768 [2024-12-08 20:06:54.555755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73642
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73642 ']'
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73642
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73642
00:11:22.768 killing process with pid 73642
20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73642'
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73642
00:11:22.768 20:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73642
00:11:22.768 [2024-12-08 20:06:54.598254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:23.028 [2024-12-08 20:06:54.993432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:24.414 20:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:24.414
00:11:24.414 real 0m11.139s
00:11:24.414 user 0m17.682s
00:11:24.414 sys 0m1.967s
00:11:24.414 ************************************
00:11:24.414 END TEST raid_state_function_test_sb
00:11:24.414 ************************************
00:11:24.414 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:24.414 20:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:24.414 20:06:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4
00:11:24.414 20:06:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:24.414 20:06:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:24.414 20:06:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:24.414 ************************************
00:11:24.414 START TEST raid_superblock_test
00:11:24.414 ************************************
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74307
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74307
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74307 ']'
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.414 20:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:24.414 [2024-12-08 20:06:56.246589] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:11:24.415 [2024-12-08 20:06:56.246783] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74307 ]
00:11:24.674 [2024-12-08 20:06:56.419932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.674 [2024-12-08 20:06:56.529452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:24.934 [2024-12-08 20:06:56.723068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:24.934 [2024-12-08 20:06:56.723218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.194 malloc1
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.194 [2024-12-08 20:06:57.107013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:25.194 [2024-12-08 20:06:57.107114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.194 [2024-12-08 20:06:57.107153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:25.194 [2024-12-08 20:06:57.107203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.194 [2024-12-08 20:06:57.109268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.194 [2024-12-08 20:06:57.109341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.194 malloc2
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.194 [2024-12-08 20:06:57.153470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:25.194 [2024-12-08 20:06:57.153525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.194 [2024-12-08 20:06:57.153550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:25.194 [2024-12-08 20:06:57.153559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.194 [2024-12-08 20:06:57.155679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.194 [2024-12-08 20:06:57.155716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.194 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.454 malloc3
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.454 [2024-12-08 20:06:57.220705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:25.454 [2024-12-08 20:06:57.220758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.454 [2024-12-08 20:06:57.220778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:25.454 [2024-12-08 20:06:57.220786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.454 [2024-12-08 20:06:57.222787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
pt3
00:11:25.454 [2024-12-08 20:06:57.222880] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.454 malloc4
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.454 [2024-12-08 20:06:57.270061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:25.454 [2024-12-08 20:06:57.270115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.454 [2024-12-08 20:06:57.270135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:25.454 [2024-12-08 20:06:57.270144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.454 [2024-12-08 20:06:57.272160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.454 [2024-12-08 20:06:57.272197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
pt4
00:11:25.454 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.455 [2024-12-08 20:06:57.282074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:25.455 [2024-12-08 20:06:57.283830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:25.455 [2024-12-08 20:06:57.283887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:25.455 [2024-12-08 20:06:57.283957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:25.455 [2024-12-08 20:06:57.284134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:25.455 [2024-12-08 20:06:57.284150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:25.455 [2024-12-08 20:06:57.284391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:25.455 [2024-12-08 20:06:57.284557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:25.455 [2024-12-08 20:06:57.284572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:25.455 [2024-12-08 20:06:57.284718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:25.455 "name": "raid_bdev1",
00:11:25.455 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d",
00:11:25.455 "strip_size_kb": 0,
00:11:25.455 "state": "online",
00:11:25.455 "raid_level": "raid1",
00:11:25.455 "superblock": true,
00:11:25.455 "num_base_bdevs": 4,
00:11:25.455 "num_base_bdevs_discovered": 4,
00:11:25.455 "num_base_bdevs_operational": 4,
00:11:25.455 "base_bdevs_list": [
00:11:25.455 {
00:11:25.455 "name": "pt1",
00:11:25.455 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:25.455 "is_configured": true,
00:11:25.455 "data_offset": 2048,
00:11:25.455 "data_size": 63488
00:11:25.455 },
00:11:25.455 {
00:11:25.455 "name": "pt2",
00:11:25.455 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:25.455 "is_configured": true,
00:11:25.455 "data_offset": 2048,
00:11:25.455 "data_size": 63488
00:11:25.455 },
00:11:25.455 {
00:11:25.455 "name": "pt3",
00:11:25.455 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:25.455 "is_configured": true,
00:11:25.455 "data_offset": 2048,
00:11:25.455 "data_size": 63488
00:11:25.455 },
00:11:25.455 {
00:11:25.455 "name": "pt4",
00:11:25.455 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:25.455 "is_configured": true,
00:11:25.455 "data_offset": 2048,
00:11:25.455 "data_size": 63488
00:11:25.455 }
00:11:25.455 ]
00:11:25.455 }'
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:25.455 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.023 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 [2024-12-08 20:06:57.701607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:26.024 "name": "raid_bdev1",
00:11:26.024 "aliases": [
00:11:26.024 "c4ab73d1-4288-48e3-b499-2b08a1463f1d"
00:11:26.024 ],
00:11:26.024 "product_name": "Raid Volume",
00:11:26.024 "block_size": 512,
00:11:26.024 "num_blocks": 63488,
00:11:26.024 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d",
00:11:26.024 "assigned_rate_limits": {
00:11:26.024 "rw_ios_per_sec": 0,
00:11:26.024 "rw_mbytes_per_sec": 0,
00:11:26.024 "r_mbytes_per_sec": 0,
00:11:26.024 "w_mbytes_per_sec": 0
00:11:26.024 },
00:11:26.024 "claimed": false,
00:11:26.024 "zoned": false,
00:11:26.024 "supported_io_types": {
00:11:26.024 "read": true,
00:11:26.024 "write": true,
00:11:26.024 "unmap": false,
00:11:26.024 "flush": false,
00:11:26.024 "reset": true,
00:11:26.024 "nvme_admin": false,
00:11:26.024 "nvme_io": false,
00:11:26.024 "nvme_io_md": false,
00:11:26.024 "write_zeroes": true,
00:11:26.024 "zcopy": false,
00:11:26.024 "get_zone_info": false,
00:11:26.024 "zone_management": false,
00:11:26.024 "zone_append": false,
00:11:26.024 "compare": false,
00:11:26.024 "compare_and_write": false,
00:11:26.024 "abort": false,
00:11:26.024 "seek_hole": false,
00:11:26.024 "seek_data": false,
00:11:26.024 "copy": false,
00:11:26.024 "nvme_iov_md": false
00:11:26.024 },
00:11:26.024 "memory_domains": [
00:11:26.024 {
00:11:26.024 "dma_device_id": "system",
00:11:26.024 "dma_device_type": 1
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.024 "dma_device_type": 2
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "system",
00:11:26.024 "dma_device_type": 1
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.024 "dma_device_type": 2
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "system",
00:11:26.024 "dma_device_type": 1
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.024 "dma_device_type": 2
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "system",
00:11:26.024 "dma_device_type": 1
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.024 "dma_device_type": 2
00:11:26.024 }
00:11:26.024 ],
00:11:26.024 "driver_specific": {
00:11:26.024 "raid": {
00:11:26.024 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d",
00:11:26.024 "strip_size_kb": 0,
00:11:26.024 "state": "online",
00:11:26.024 "raid_level": "raid1",
00:11:26.024 "superblock": true,
00:11:26.024 "num_base_bdevs": 4,
00:11:26.024 "num_base_bdevs_discovered": 4,
00:11:26.024 "num_base_bdevs_operational": 4,
00:11:26.024 "base_bdevs_list": [
00:11:26.024 {
00:11:26.024 "name": "pt1",
00:11:26.024 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:26.024 "is_configured": true,
00:11:26.024 "data_offset": 2048,
00:11:26.024 "data_size": 63488
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "name": "pt2",
00:11:26.024 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:26.024 "is_configured": true,
00:11:26.024 "data_offset": 2048,
00:11:26.024 "data_size": 63488
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "name": "pt3",
00:11:26.024 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:26.024 "is_configured": true,
00:11:26.024 "data_offset": 2048,
00:11:26.024 "data_size": 63488
00:11:26.024 },
00:11:26.024 {
00:11:26.024 "name": "pt4",
00:11:26.024 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:26.024 "is_configured": true,
00:11:26.024 "data_offset": 2048,
00:11:26.024 "data_size": 63488
00:11:26.024 }
00:11:26.024 ]
00:11:26.024 }
00:11:26.024 }
00:11:26.024 }'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:26.024 pt2
00:11:26.024 pt3
00:11:26.024 pt4'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.024 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.024 [2024-12-08 20:06:57.977130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:26.285 20:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c4ab73d1-4288-48e3-b499-2b08a1463f1d
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c4ab73d1-4288-48e3-b499-2b08a1463f1d ']'
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.285 [2024-12-08 20:06:58.004805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:26.285 [2024-12-08 20:06:58.004875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:26.285 [2024-12-08 20:06:58.005024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:26.285 [2024-12-08 20:06:58.005166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:26.285 [2024-12-08 20:06:58.005231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563
-- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.285 20:06:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.285 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.285 [2024-12-08 20:06:58.152561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.285 [2024-12-08 20:06:58.154479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.285 [2024-12-08 20:06:58.154527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:26.285 [2024-12-08 20:06:58.154561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:26.285 [2024-12-08 20:06:58.154610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:26.285 [2024-12-08 20:06:58.154659] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.285 [2024-12-08 20:06:58.154678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:26.285 [2024-12-08 20:06:58.154696] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:26.285 [2024-12-08 20:06:58.154708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.286 [2024-12-08 20:06:58.154719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:26.286 request: 00:11:26.286 { 00:11:26.286 "name": "raid_bdev1", 00:11:26.286 "raid_level": "raid1", 00:11:26.286 "base_bdevs": [ 00:11:26.286 "malloc1", 00:11:26.286 "malloc2", 00:11:26.286 "malloc3", 00:11:26.286 "malloc4" 00:11:26.286 ], 00:11:26.286 "superblock": false, 00:11:26.286 "method": "bdev_raid_create", 00:11:26.286 "req_id": 1 00:11:26.286 } 00:11:26.286 Got JSON-RPC error response 00:11:26.286 response: 00:11:26.286 { 00:11:26.286 "code": -17, 00:11:26.286 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.286 } 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:26.286 
20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.286 [2024-12-08 20:06:58.208468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:26.286 [2024-12-08 20:06:58.208568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.286 [2024-12-08 20:06:58.208589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.286 [2024-12-08 20:06:58.208601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.286 [2024-12-08 20:06:58.210719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.286 [2024-12-08 20:06:58.210756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:26.286 [2024-12-08 20:06:58.210832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:26.286 [2024-12-08 20:06:58.210889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:26.286 pt1 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.286 20:06:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.286 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.546 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.546 "name": "raid_bdev1", 00:11:26.546 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:26.546 "strip_size_kb": 0, 00:11:26.546 "state": "configuring", 00:11:26.546 "raid_level": "raid1", 00:11:26.546 "superblock": true, 00:11:26.546 "num_base_bdevs": 4, 00:11:26.546 "num_base_bdevs_discovered": 1, 00:11:26.546 "num_base_bdevs_operational": 4, 00:11:26.546 "base_bdevs_list": [ 00:11:26.546 { 00:11:26.546 "name": "pt1", 00:11:26.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.546 "is_configured": true, 00:11:26.546 "data_offset": 2048, 00:11:26.546 "data_size": 63488 00:11:26.546 }, 00:11:26.546 { 00:11:26.546 "name": null, 00:11:26.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.546 "is_configured": false, 00:11:26.546 "data_offset": 2048, 00:11:26.546 "data_size": 63488 00:11:26.546 }, 00:11:26.546 { 00:11:26.546 "name": null, 00:11:26.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.546 
"is_configured": false, 00:11:26.546 "data_offset": 2048, 00:11:26.546 "data_size": 63488 00:11:26.546 }, 00:11:26.546 { 00:11:26.546 "name": null, 00:11:26.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.546 "is_configured": false, 00:11:26.546 "data_offset": 2048, 00:11:26.546 "data_size": 63488 00:11:26.546 } 00:11:26.546 ] 00:11:26.546 }' 00:11:26.546 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.546 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.865 [2024-12-08 20:06:58.603836] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.865 [2024-12-08 20:06:58.603912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.865 [2024-12-08 20:06:58.603935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:26.865 [2024-12-08 20:06:58.603956] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.865 [2024-12-08 20:06:58.604406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.865 [2024-12-08 20:06:58.604444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.865 [2024-12-08 20:06:58.604529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.865 [2024-12-08 20:06:58.604560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:26.865 pt2 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.865 [2024-12-08 20:06:58.611810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.865 "name": "raid_bdev1", 00:11:26.865 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:26.865 "strip_size_kb": 0, 00:11:26.865 "state": "configuring", 00:11:26.865 "raid_level": "raid1", 00:11:26.865 "superblock": true, 00:11:26.865 "num_base_bdevs": 4, 00:11:26.865 "num_base_bdevs_discovered": 1, 00:11:26.865 "num_base_bdevs_operational": 4, 00:11:26.865 "base_bdevs_list": [ 00:11:26.865 { 00:11:26.865 "name": "pt1", 00:11:26.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.865 "is_configured": true, 00:11:26.865 "data_offset": 2048, 00:11:26.865 "data_size": 63488 00:11:26.865 }, 00:11:26.865 { 00:11:26.865 "name": null, 00:11:26.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.865 "is_configured": false, 00:11:26.865 "data_offset": 0, 00:11:26.865 "data_size": 63488 00:11:26.865 }, 00:11:26.865 { 00:11:26.865 "name": null, 00:11:26.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.865 "is_configured": false, 00:11:26.865 "data_offset": 2048, 00:11:26.865 "data_size": 63488 00:11:26.865 }, 00:11:26.865 { 00:11:26.865 "name": null, 00:11:26.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.865 "is_configured": false, 00:11:26.865 "data_offset": 2048, 00:11:26.865 "data_size": 63488 00:11:26.865 } 00:11:26.865 ] 00:11:26.865 }' 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.865 20:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 [2024-12-08 20:06:59.079082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.152 [2024-12-08 20:06:59.079207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.152 [2024-12-08 20:06:59.079291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:27.152 [2024-12-08 20:06:59.079337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.152 [2024-12-08 20:06:59.079845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.152 [2024-12-08 20:06:59.079911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.152 [2024-12-08 20:06:59.080063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.152 [2024-12-08 20:06:59.080119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.152 pt2 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.152 20:06:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 [2024-12-08 20:06:59.091049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.152 [2024-12-08 20:06:59.091138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.152 [2024-12-08 20:06:59.091183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:27.152 [2024-12-08 20:06:59.091258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.152 [2024-12-08 20:06:59.091764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.152 [2024-12-08 20:06:59.091836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.152 [2024-12-08 20:06:59.091973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:27.152 [2024-12-08 20:06:59.092031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.152 pt3 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 [2024-12-08 20:06:59.102997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.152 [2024-12-08 
20:06:59.103071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.152 [2024-12-08 20:06:59.103119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:27.152 [2024-12-08 20:06:59.103157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.152 [2024-12-08 20:06:59.103600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.152 [2024-12-08 20:06:59.103666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.152 [2024-12-08 20:06:59.103781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:27.152 [2024-12-08 20:06:59.103813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.152 [2024-12-08 20:06:59.103991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.152 [2024-12-08 20:06:59.104001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.152 [2024-12-08 20:06:59.104233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:27.152 [2024-12-08 20:06:59.104388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.152 [2024-12-08 20:06:59.104401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:27.152 [2024-12-08 20:06:59.104552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.152 pt4 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.152 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.412 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.412 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.412 "name": "raid_bdev1", 00:11:27.412 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:27.412 "strip_size_kb": 0, 00:11:27.412 "state": "online", 00:11:27.412 "raid_level": "raid1", 00:11:27.412 "superblock": true, 00:11:27.412 "num_base_bdevs": 4, 00:11:27.412 
"num_base_bdevs_discovered": 4, 00:11:27.412 "num_base_bdevs_operational": 4, 00:11:27.412 "base_bdevs_list": [ 00:11:27.412 { 00:11:27.412 "name": "pt1", 00:11:27.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.412 "is_configured": true, 00:11:27.412 "data_offset": 2048, 00:11:27.412 "data_size": 63488 00:11:27.412 }, 00:11:27.412 { 00:11:27.412 "name": "pt2", 00:11:27.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.412 "is_configured": true, 00:11:27.412 "data_offset": 2048, 00:11:27.412 "data_size": 63488 00:11:27.412 }, 00:11:27.412 { 00:11:27.412 "name": "pt3", 00:11:27.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.412 "is_configured": true, 00:11:27.412 "data_offset": 2048, 00:11:27.412 "data_size": 63488 00:11:27.412 }, 00:11:27.412 { 00:11:27.412 "name": "pt4", 00:11:27.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.412 "is_configured": true, 00:11:27.412 "data_offset": 2048, 00:11:27.412 "data_size": 63488 00:11:27.412 } 00:11:27.412 ] 00:11:27.412 }' 00:11:27.412 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.412 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.671 20:06:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.671 [2024-12-08 20:06:59.522647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.671 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.671 "name": "raid_bdev1", 00:11:27.671 "aliases": [ 00:11:27.671 "c4ab73d1-4288-48e3-b499-2b08a1463f1d" 00:11:27.672 ], 00:11:27.672 "product_name": "Raid Volume", 00:11:27.672 "block_size": 512, 00:11:27.672 "num_blocks": 63488, 00:11:27.672 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:27.672 "assigned_rate_limits": { 00:11:27.672 "rw_ios_per_sec": 0, 00:11:27.672 "rw_mbytes_per_sec": 0, 00:11:27.672 "r_mbytes_per_sec": 0, 00:11:27.672 "w_mbytes_per_sec": 0 00:11:27.672 }, 00:11:27.672 "claimed": false, 00:11:27.672 "zoned": false, 00:11:27.672 "supported_io_types": { 00:11:27.672 "read": true, 00:11:27.672 "write": true, 00:11:27.672 "unmap": false, 00:11:27.672 "flush": false, 00:11:27.672 "reset": true, 00:11:27.672 "nvme_admin": false, 00:11:27.672 "nvme_io": false, 00:11:27.672 "nvme_io_md": false, 00:11:27.672 "write_zeroes": true, 00:11:27.672 "zcopy": false, 00:11:27.672 "get_zone_info": false, 00:11:27.672 "zone_management": false, 00:11:27.672 "zone_append": false, 00:11:27.672 "compare": false, 00:11:27.672 "compare_and_write": false, 00:11:27.672 "abort": false, 00:11:27.672 "seek_hole": false, 00:11:27.672 "seek_data": false, 00:11:27.672 "copy": false, 00:11:27.672 "nvme_iov_md": false 00:11:27.672 }, 00:11:27.672 "memory_domains": [ 00:11:27.672 { 00:11:27.672 "dma_device_id": "system", 00:11:27.672 
"dma_device_type": 1 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.672 "dma_device_type": 2 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "system", 00:11:27.672 "dma_device_type": 1 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.672 "dma_device_type": 2 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "system", 00:11:27.672 "dma_device_type": 1 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.672 "dma_device_type": 2 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "system", 00:11:27.672 "dma_device_type": 1 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.672 "dma_device_type": 2 00:11:27.672 } 00:11:27.672 ], 00:11:27.672 "driver_specific": { 00:11:27.672 "raid": { 00:11:27.672 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:27.672 "strip_size_kb": 0, 00:11:27.672 "state": "online", 00:11:27.672 "raid_level": "raid1", 00:11:27.672 "superblock": true, 00:11:27.672 "num_base_bdevs": 4, 00:11:27.672 "num_base_bdevs_discovered": 4, 00:11:27.672 "num_base_bdevs_operational": 4, 00:11:27.672 "base_bdevs_list": [ 00:11:27.672 { 00:11:27.672 "name": "pt1", 00:11:27.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.672 "is_configured": true, 00:11:27.672 "data_offset": 2048, 00:11:27.672 "data_size": 63488 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "name": "pt2", 00:11:27.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.672 "is_configured": true, 00:11:27.672 "data_offset": 2048, 00:11:27.672 "data_size": 63488 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "name": "pt3", 00:11:27.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.672 "is_configured": true, 00:11:27.672 "data_offset": 2048, 00:11:27.672 "data_size": 63488 00:11:27.672 }, 00:11:27.672 { 00:11:27.672 "name": "pt4", 00:11:27.672 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:27.672 "is_configured": true, 00:11:27.672 "data_offset": 2048, 00:11:27.672 "data_size": 63488 00:11:27.672 } 00:11:27.672 ] 00:11:27.672 } 00:11:27.672 } 00:11:27.672 }' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.672 pt2 00:11:27.672 pt3 00:11:27.672 pt4' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.672 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.931 20:06:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.931 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.932 [2024-12-08 20:06:59.802076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c4ab73d1-4288-48e3-b499-2b08a1463f1d '!=' c4ab73d1-4288-48e3-b499-2b08a1463f1d ']' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 [2024-12-08 20:06:59.849756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:27.932 20:06:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.932 "name": "raid_bdev1", 00:11:27.932 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:27.932 "strip_size_kb": 0, 00:11:27.932 "state": "online", 
00:11:27.932 "raid_level": "raid1", 00:11:27.932 "superblock": true, 00:11:27.932 "num_base_bdevs": 4, 00:11:27.932 "num_base_bdevs_discovered": 3, 00:11:27.932 "num_base_bdevs_operational": 3, 00:11:27.932 "base_bdevs_list": [ 00:11:27.932 { 00:11:27.932 "name": null, 00:11:27.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.932 "is_configured": false, 00:11:27.932 "data_offset": 0, 00:11:27.932 "data_size": 63488 00:11:27.932 }, 00:11:27.932 { 00:11:27.932 "name": "pt2", 00:11:27.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.932 "is_configured": true, 00:11:27.932 "data_offset": 2048, 00:11:27.932 "data_size": 63488 00:11:27.932 }, 00:11:27.932 { 00:11:27.932 "name": "pt3", 00:11:27.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.932 "is_configured": true, 00:11:27.932 "data_offset": 2048, 00:11:27.932 "data_size": 63488 00:11:27.932 }, 00:11:27.932 { 00:11:27.932 "name": "pt4", 00:11:27.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.932 "is_configured": true, 00:11:27.932 "data_offset": 2048, 00:11:27.932 "data_size": 63488 00:11:27.932 } 00:11:27.932 ] 00:11:27.932 }' 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.932 20:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 [2024-12-08 20:07:00.257076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.500 [2024-12-08 20:07:00.257151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.500 [2024-12-08 20:07:00.257279] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:28.500 [2024-12-08 20:07:00.257402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.500 [2024-12-08 20:07:00.257462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:28.500 
20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 [2024-12-08 20:07:00.352878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:28.500 [2024-12-08 20:07:00.352931] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.500 [2024-12-08 20:07:00.352963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:28.500 [2024-12-08 20:07:00.352972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.500 [2024-12-08 20:07:00.355123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.500 [2024-12-08 20:07:00.355159] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:28.500 [2024-12-08 20:07:00.355280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:28.500 [2024-12-08 20:07:00.355335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.500 pt2 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.500 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.500 "name": "raid_bdev1", 00:11:28.500 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:28.500 "strip_size_kb": 0, 00:11:28.500 "state": "configuring", 00:11:28.500 "raid_level": "raid1", 00:11:28.500 "superblock": true, 00:11:28.500 "num_base_bdevs": 4, 00:11:28.500 "num_base_bdevs_discovered": 1, 00:11:28.500 "num_base_bdevs_operational": 3, 00:11:28.500 "base_bdevs_list": [ 00:11:28.500 { 00:11:28.500 "name": null, 00:11:28.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.500 "is_configured": false, 00:11:28.500 "data_offset": 2048, 00:11:28.500 "data_size": 63488 00:11:28.500 }, 00:11:28.500 { 00:11:28.500 "name": "pt2", 00:11:28.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.500 "is_configured": true, 00:11:28.500 "data_offset": 2048, 00:11:28.501 "data_size": 63488 00:11:28.501 }, 00:11:28.501 { 00:11:28.501 "name": null, 00:11:28.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.501 "is_configured": false, 00:11:28.501 "data_offset": 2048, 00:11:28.501 "data_size": 63488 00:11:28.501 }, 00:11:28.501 { 00:11:28.501 "name": null, 00:11:28.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.501 "is_configured": false, 00:11:28.501 "data_offset": 2048, 00:11:28.501 "data_size": 63488 00:11:28.501 } 00:11:28.501 ] 00:11:28.501 }' 
00:11:28.501 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.501 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 [2024-12-08 20:07:00.784142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:29.069 [2024-12-08 20:07:00.784258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.069 [2024-12-08 20:07:00.784299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:29.069 [2024-12-08 20:07:00.784328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.069 [2024-12-08 20:07:00.784824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.069 [2024-12-08 20:07:00.784884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:29.069 [2024-12-08 20:07:00.785022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:29.069 [2024-12-08 20:07:00.785078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.069 pt3 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.069 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.070 "name": "raid_bdev1", 00:11:29.070 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:29.070 "strip_size_kb": 0, 00:11:29.070 "state": "configuring", 00:11:29.070 "raid_level": "raid1", 00:11:29.070 "superblock": true, 00:11:29.070 "num_base_bdevs": 4, 00:11:29.070 "num_base_bdevs_discovered": 2, 00:11:29.070 "num_base_bdevs_operational": 3, 00:11:29.070 
"base_bdevs_list": [ 00:11:29.070 { 00:11:29.070 "name": null, 00:11:29.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.070 "is_configured": false, 00:11:29.070 "data_offset": 2048, 00:11:29.070 "data_size": 63488 00:11:29.070 }, 00:11:29.070 { 00:11:29.070 "name": "pt2", 00:11:29.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.070 "is_configured": true, 00:11:29.070 "data_offset": 2048, 00:11:29.070 "data_size": 63488 00:11:29.070 }, 00:11:29.070 { 00:11:29.070 "name": "pt3", 00:11:29.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.070 "is_configured": true, 00:11:29.070 "data_offset": 2048, 00:11:29.070 "data_size": 63488 00:11:29.070 }, 00:11:29.070 { 00:11:29.070 "name": null, 00:11:29.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.070 "is_configured": false, 00:11:29.070 "data_offset": 2048, 00:11:29.070 "data_size": 63488 00:11:29.070 } 00:11:29.070 ] 00:11:29.070 }' 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.070 20:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.329 [2024-12-08 20:07:01.223405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:29.329 [2024-12-08 20:07:01.223521] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.329 [2024-12-08 20:07:01.223592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:29.329 [2024-12-08 20:07:01.223625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.329 [2024-12-08 20:07:01.224109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.329 [2024-12-08 20:07:01.224180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:29.329 [2024-12-08 20:07:01.224308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:29.329 [2024-12-08 20:07:01.224370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:29.329 [2024-12-08 20:07:01.224543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:29.329 [2024-12-08 20:07:01.224580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.329 [2024-12-08 20:07:01.224860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:29.329 [2024-12-08 20:07:01.225061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:29.329 [2024-12-08 20:07:01.225110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:29.329 [2024-12-08 20:07:01.225309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.329 pt4 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.329 "name": "raid_bdev1", 00:11:29.329 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:29.329 "strip_size_kb": 0, 00:11:29.329 "state": "online", 00:11:29.329 "raid_level": "raid1", 00:11:29.329 "superblock": true, 00:11:29.329 "num_base_bdevs": 4, 00:11:29.329 "num_base_bdevs_discovered": 3, 00:11:29.329 "num_base_bdevs_operational": 3, 00:11:29.329 "base_bdevs_list": [ 00:11:29.329 { 00:11:29.329 "name": null, 00:11:29.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.329 "is_configured": false, 00:11:29.329 
"data_offset": 2048, 00:11:29.329 "data_size": 63488 00:11:29.329 }, 00:11:29.329 { 00:11:29.329 "name": "pt2", 00:11:29.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.329 "is_configured": true, 00:11:29.329 "data_offset": 2048, 00:11:29.329 "data_size": 63488 00:11:29.329 }, 00:11:29.329 { 00:11:29.329 "name": "pt3", 00:11:29.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.329 "is_configured": true, 00:11:29.329 "data_offset": 2048, 00:11:29.329 "data_size": 63488 00:11:29.329 }, 00:11:29.329 { 00:11:29.329 "name": "pt4", 00:11:29.329 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.329 "is_configured": true, 00:11:29.329 "data_offset": 2048, 00:11:29.329 "data_size": 63488 00:11:29.329 } 00:11:29.329 ] 00:11:29.329 }' 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.329 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 [2024-12-08 20:07:01.606772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.899 [2024-12-08 20:07:01.606803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.899 [2024-12-08 20:07:01.606886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.899 [2024-12-08 20:07:01.606979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.899 [2024-12-08 20:07:01.606992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:29.899 20:07:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 [2024-12-08 20:07:01.682627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:29.899 [2024-12-08 20:07:01.682688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:29.899 [2024-12-08 20:07:01.682706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:29.899 [2024-12-08 20:07:01.682718] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.899 [2024-12-08 20:07:01.684946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.899 [2024-12-08 20:07:01.684993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:29.899 [2024-12-08 20:07:01.685073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:29.899 [2024-12-08 20:07:01.685120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:29.899 [2024-12-08 20:07:01.685276] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:29.899 [2024-12-08 20:07:01.685291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.899 [2024-12-08 20:07:01.685306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:29.899 [2024-12-08 20:07:01.685370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.899 [2024-12-08 20:07:01.685483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.899 pt1 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.899 "name": "raid_bdev1", 00:11:29.899 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:29.899 "strip_size_kb": 0, 00:11:29.899 "state": "configuring", 00:11:29.899 "raid_level": "raid1", 00:11:29.899 "superblock": true, 00:11:29.899 "num_base_bdevs": 4, 00:11:29.899 "num_base_bdevs_discovered": 2, 00:11:29.899 "num_base_bdevs_operational": 3, 00:11:29.899 "base_bdevs_list": [ 00:11:29.899 { 00:11:29.899 "name": null, 00:11:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.899 "is_configured": false, 00:11:29.899 "data_offset": 2048, 00:11:29.899 
"data_size": 63488 00:11:29.899 }, 00:11:29.899 { 00:11:29.899 "name": "pt2", 00:11:29.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.899 "is_configured": true, 00:11:29.899 "data_offset": 2048, 00:11:29.899 "data_size": 63488 00:11:29.899 }, 00:11:29.899 { 00:11:29.899 "name": "pt3", 00:11:29.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.899 "is_configured": true, 00:11:29.899 "data_offset": 2048, 00:11:29.899 "data_size": 63488 00:11:29.899 }, 00:11:29.899 { 00:11:29.899 "name": null, 00:11:29.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.899 "is_configured": false, 00:11:29.899 "data_offset": 2048, 00:11:29.899 "data_size": 63488 00:11:29.899 } 00:11:29.899 ] 00:11:29.899 }' 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.899 20:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.159 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:30.159 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.159 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.159 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:30.159 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.419 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:30.419 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:30.419 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.419 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.419 [2024-12-08 
20:07:02.153877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:30.419 [2024-12-08 20:07:02.153986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.419 [2024-12-08 20:07:02.154026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:30.419 [2024-12-08 20:07:02.154055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.419 [2024-12-08 20:07:02.154528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.419 [2024-12-08 20:07:02.154593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:30.419 [2024-12-08 20:07:02.154722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:30.419 [2024-12-08 20:07:02.154774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:30.419 [2024-12-08 20:07:02.154967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:30.419 [2024-12-08 20:07:02.155007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.420 [2024-12-08 20:07:02.155336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:30.420 [2024-12-08 20:07:02.155549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:30.420 [2024-12-08 20:07:02.155595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:30.420 [2024-12-08 20:07:02.155794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.420 pt4 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:30.420 20:07:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.420 "name": "raid_bdev1", 00:11:30.420 "uuid": "c4ab73d1-4288-48e3-b499-2b08a1463f1d", 00:11:30.420 "strip_size_kb": 0, 00:11:30.420 "state": "online", 00:11:30.420 "raid_level": "raid1", 00:11:30.420 "superblock": true, 00:11:30.420 "num_base_bdevs": 4, 00:11:30.420 "num_base_bdevs_discovered": 3, 00:11:30.420 "num_base_bdevs_operational": 3, 00:11:30.420 "base_bdevs_list": [ 00:11:30.420 { 
00:11:30.420 "name": null, 00:11:30.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.420 "is_configured": false, 00:11:30.420 "data_offset": 2048, 00:11:30.420 "data_size": 63488 00:11:30.420 }, 00:11:30.420 { 00:11:30.420 "name": "pt2", 00:11:30.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.420 "is_configured": true, 00:11:30.420 "data_offset": 2048, 00:11:30.420 "data_size": 63488 00:11:30.420 }, 00:11:30.420 { 00:11:30.420 "name": "pt3", 00:11:30.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.420 "is_configured": true, 00:11:30.420 "data_offset": 2048, 00:11:30.420 "data_size": 63488 00:11:30.420 }, 00:11:30.420 { 00:11:30.420 "name": "pt4", 00:11:30.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:30.420 "is_configured": true, 00:11:30.420 "data_offset": 2048, 00:11:30.420 "data_size": 63488 00:11:30.420 } 00:11:30.420 ] 00:11:30.420 }' 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.420 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.681 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:30.941 
20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 [2024-12-08 20:07:02.665272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c4ab73d1-4288-48e3-b499-2b08a1463f1d '!=' c4ab73d1-4288-48e3-b499-2b08a1463f1d ']' 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74307 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74307 ']' 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74307 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74307 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74307' 00:11:30.941 killing process with pid 74307 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74307 00:11:30.941 [2024-12-08 20:07:02.742954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.941 [2024-12-08 20:07:02.743066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.941 [2024-12-08 20:07:02.743150] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.941 [2024-12-08 20:07:02.743162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:30.941 20:07:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74307 00:11:31.200 [2024-12-08 20:07:03.131834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.581 20:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:32.581 00:11:32.581 real 0m8.074s 00:11:32.581 user 0m12.662s 00:11:32.581 sys 0m1.429s 00:11:32.581 ************************************ 00:11:32.581 END TEST raid_superblock_test 00:11:32.581 ************************************ 00:11:32.581 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.581 20:07:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.581 20:07:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:32.581 20:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.581 20:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.581 20:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.581 ************************************ 00:11:32.581 START TEST raid_read_error_test 00:11:32.581 ************************************ 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:32.581 20:07:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bJRu5SUkyj 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74789 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74789 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74789 ']' 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.581 20:07:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.581 [2024-12-08 20:07:04.414126] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:32.581 [2024-12-08 20:07:04.414245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74789 ] 00:11:32.842 [2024-12-08 20:07:04.588920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.842 [2024-12-08 20:07:04.704311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.101 [2024-12-08 20:07:04.897970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.101 [2024-12-08 20:07:04.898036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.360 BaseBdev1_malloc 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.360 true 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.360 [2024-12-08 20:07:05.292507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:33.360 [2024-12-08 20:07:05.292765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.360 [2024-12-08 20:07:05.292837] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:33.360 [2024-12-08 20:07:05.292928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.360 [2024-12-08 20:07:05.295051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.360 [2024-12-08 20:07:05.295246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:33.360 BaseBdev1 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.360 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 BaseBdev2_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 true 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 [2024-12-08 20:07:05.357893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:33.620 [2024-12-08 20:07:05.358102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.620 [2024-12-08 20:07:05.358126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:33.620 [2024-12-08 20:07:05.358136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.620 [2024-12-08 20:07:05.360385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.620 [2024-12-08 20:07:05.360519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:33.620 BaseBdev2 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 BaseBdev3_malloc 00:11:33.620 20:07:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 true 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 [2024-12-08 20:07:05.437538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:33.620 [2024-12-08 20:07:05.437912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.620 [2024-12-08 20:07:05.438019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:33.620 [2024-12-08 20:07:05.438104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.620 [2024-12-08 20:07:05.440207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.620 [2024-12-08 20:07:05.440342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:33.620 BaseBdev3 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 BaseBdev4_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 true 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 [2024-12-08 20:07:05.504220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:33.620 [2024-12-08 20:07:05.504411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.620 [2024-12-08 20:07:05.504436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:33.620 [2024-12-08 20:07:05.504446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.620 [2024-12-08 20:07:05.506473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.620 [2024-12-08 20:07:05.506514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:33.620 BaseBdev4 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 [2024-12-08 20:07:05.516251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.620 [2024-12-08 20:07:05.517982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.620 [2024-12-08 20:07:05.518054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.620 [2024-12-08 20:07:05.518111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.620 [2024-12-08 20:07:05.518334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:33.620 [2024-12-08 20:07:05.518348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.620 [2024-12-08 20:07:05.518567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:33.620 [2024-12-08 20:07:05.518743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:33.620 [2024-12-08 20:07:05.518752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:33.620 [2024-12-08 20:07:05.518896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.620 20:07:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.620 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.620 "name": "raid_bdev1", 00:11:33.620 "uuid": "d80c3cd1-d153-4927-acac-03197d3a592d", 00:11:33.620 "strip_size_kb": 0, 00:11:33.620 "state": "online", 00:11:33.620 "raid_level": "raid1", 00:11:33.620 "superblock": true, 00:11:33.620 "num_base_bdevs": 4, 00:11:33.620 "num_base_bdevs_discovered": 4, 00:11:33.620 "num_base_bdevs_operational": 4, 00:11:33.620 "base_bdevs_list": [ 00:11:33.620 { 
00:11:33.620 "name": "BaseBdev1", 00:11:33.620 "uuid": "0d292262-97c4-526a-bcb8-ef90133184ae", 00:11:33.620 "is_configured": true, 00:11:33.620 "data_offset": 2048, 00:11:33.620 "data_size": 63488 00:11:33.620 }, 00:11:33.620 { 00:11:33.620 "name": "BaseBdev2", 00:11:33.620 "uuid": "c75680ba-9152-55df-bbc8-084206f90f26", 00:11:33.621 "is_configured": true, 00:11:33.621 "data_offset": 2048, 00:11:33.621 "data_size": 63488 00:11:33.621 }, 00:11:33.621 { 00:11:33.621 "name": "BaseBdev3", 00:11:33.621 "uuid": "1380969c-7cd3-50c6-8925-0d0b0242b2af", 00:11:33.621 "is_configured": true, 00:11:33.621 "data_offset": 2048, 00:11:33.621 "data_size": 63488 00:11:33.621 }, 00:11:33.621 { 00:11:33.621 "name": "BaseBdev4", 00:11:33.621 "uuid": "6d16f26a-8f6f-57aa-be19-8ca6b3344fd5", 00:11:33.621 "is_configured": true, 00:11:33.621 "data_offset": 2048, 00:11:33.621 "data_size": 63488 00:11:33.621 } 00:11:33.621 ] 00:11:33.621 }' 00:11:33.621 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.621 20:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.190 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:34.190 20:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:34.190 [2024-12-08 20:07:06.052770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.131 20:07:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.131 20:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.131 20:07:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.131 "name": "raid_bdev1", 00:11:35.131 "uuid": "d80c3cd1-d153-4927-acac-03197d3a592d", 00:11:35.131 "strip_size_kb": 0, 00:11:35.131 "state": "online", 00:11:35.131 "raid_level": "raid1", 00:11:35.131 "superblock": true, 00:11:35.131 "num_base_bdevs": 4, 00:11:35.131 "num_base_bdevs_discovered": 4, 00:11:35.131 "num_base_bdevs_operational": 4, 00:11:35.131 "base_bdevs_list": [ 00:11:35.131 { 00:11:35.131 "name": "BaseBdev1", 00:11:35.131 "uuid": "0d292262-97c4-526a-bcb8-ef90133184ae", 00:11:35.131 "is_configured": true, 00:11:35.131 "data_offset": 2048, 00:11:35.131 "data_size": 63488 00:11:35.131 }, 00:11:35.131 { 00:11:35.131 "name": "BaseBdev2", 00:11:35.131 "uuid": "c75680ba-9152-55df-bbc8-084206f90f26", 00:11:35.131 "is_configured": true, 00:11:35.131 "data_offset": 2048, 00:11:35.131 "data_size": 63488 00:11:35.131 }, 00:11:35.131 { 00:11:35.131 "name": "BaseBdev3", 00:11:35.131 "uuid": "1380969c-7cd3-50c6-8925-0d0b0242b2af", 00:11:35.131 "is_configured": true, 00:11:35.131 "data_offset": 2048, 00:11:35.131 "data_size": 63488 00:11:35.131 }, 00:11:35.131 { 00:11:35.131 "name": "BaseBdev4", 00:11:35.131 "uuid": "6d16f26a-8f6f-57aa-be19-8ca6b3344fd5", 00:11:35.131 "is_configured": true, 00:11:35.131 "data_offset": 2048, 00:11:35.131 "data_size": 63488 00:11:35.131 } 00:11:35.131 ] 00:11:35.131 }' 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.131 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.702 [2024-12-08 20:07:07.433567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.702 [2024-12-08 20:07:07.433674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.702 [2024-12-08 20:07:07.436399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.702 [2024-12-08 20:07:07.436456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.702 [2024-12-08 20:07:07.436570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.702 [2024-12-08 20:07:07.436582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:35.702 { 00:11:35.702 "results": [ 00:11:35.702 { 00:11:35.702 "job": "raid_bdev1", 00:11:35.702 "core_mask": "0x1", 00:11:35.702 "workload": "randrw", 00:11:35.702 "percentage": 50, 00:11:35.702 "status": "finished", 00:11:35.702 "queue_depth": 1, 00:11:35.702 "io_size": 131072, 00:11:35.702 "runtime": 1.381794, 00:11:35.702 "iops": 10579.72461886504, 00:11:35.702 "mibps": 1322.46557735813, 00:11:35.702 "io_failed": 0, 00:11:35.702 "io_timeout": 0, 00:11:35.702 "avg_latency_us": 91.86622350348038, 00:11:35.702 "min_latency_us": 24.146724890829695, 00:11:35.702 "max_latency_us": 1609.7816593886462 00:11:35.702 } 00:11:35.702 ], 00:11:35.702 "core_count": 1 00:11:35.702 } 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74789 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74789 ']' 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74789 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74789 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74789' 00:11:35.702 killing process with pid 74789 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74789 00:11:35.702 [2024-12-08 20:07:07.481815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.702 20:07:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74789 00:11:35.962 [2024-12-08 20:07:07.797313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bJRu5SUkyj 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:37.347 00:11:37.347 real 0m4.629s 00:11:37.347 user 0m5.421s 00:11:37.347 sys 0m0.580s 
00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.347 20:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.347 ************************************ 00:11:37.347 END TEST raid_read_error_test 00:11:37.347 ************************************ 00:11:37.347 20:07:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:37.347 20:07:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:37.347 20:07:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.347 20:07:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.347 ************************************ 00:11:37.347 START TEST raid_write_error_test 00:11:37.347 ************************************ 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cJGX3RUPza 00:11:37.347 20:07:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74933 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74933 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74933 ']' 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.347 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.347 [2024-12-08 20:07:09.112356] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:11:37.347 [2024-12-08 20:07:09.113010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74933 ] 00:11:37.347 [2024-12-08 20:07:09.283506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.607 [2024-12-08 20:07:09.391784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.608 [2024-12-08 20:07:09.583661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.868 [2024-12-08 20:07:09.583807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 BaseBdev1_malloc 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 true 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 [2024-12-08 20:07:09.976328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.129 [2024-12-08 20:07:09.976384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.129 [2024-12-08 20:07:09.976402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:38.129 [2024-12-08 20:07:09.976412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.129 [2024-12-08 20:07:09.978383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.129 [2024-12-08 20:07:09.978498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.129 BaseBdev1 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 BaseBdev2_malloc 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:38.129 20:07:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 true 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.129 [2024-12-08 20:07:10.040046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.129 [2024-12-08 20:07:10.040162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.129 [2024-12-08 20:07:10.040183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:38.129 [2024-12-08 20:07:10.040194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.129 [2024-12-08 20:07:10.042213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.129 [2024-12-08 20:07:10.042251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.129 BaseBdev2 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:38.129 BaseBdev3_malloc 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.129 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.390 true 00:11:38.390 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 [2024-12-08 20:07:10.121295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:38.391 [2024-12-08 20:07:10.121349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.391 [2024-12-08 20:07:10.121367] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.391 [2024-12-08 20:07:10.121378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.391 [2024-12-08 20:07:10.123663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.391 [2024-12-08 20:07:10.123752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:38.391 BaseBdev3 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 BaseBdev4_malloc 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 true 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 [2024-12-08 20:07:10.189621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:38.391 [2024-12-08 20:07:10.189716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.391 [2024-12-08 20:07:10.189763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:38.391 [2024-12-08 20:07:10.189795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.391 [2024-12-08 20:07:10.192188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.391 [2024-12-08 20:07:10.192269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:38.391 BaseBdev4 
00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 [2024-12-08 20:07:10.201651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.391 [2024-12-08 20:07:10.203528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.391 [2024-12-08 20:07:10.203656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.391 [2024-12-08 20:07:10.203742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.391 [2024-12-08 20:07:10.203992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:38.391 [2024-12-08 20:07:10.204008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.391 [2024-12-08 20:07:10.204260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:38.391 [2024-12-08 20:07:10.204430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:38.391 [2024-12-08 20:07:10.204440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:38.391 [2024-12-08 20:07:10.204594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.391 "name": "raid_bdev1", 00:11:38.391 "uuid": "8437c9f6-0e54-46f8-b1fd-476cd905b07d", 00:11:38.391 "strip_size_kb": 0, 00:11:38.391 "state": "online", 00:11:38.391 "raid_level": "raid1", 00:11:38.391 "superblock": true, 00:11:38.391 "num_base_bdevs": 4, 00:11:38.391 "num_base_bdevs_discovered": 4, 00:11:38.391 
"num_base_bdevs_operational": 4, 00:11:38.391 "base_bdevs_list": [ 00:11:38.391 { 00:11:38.391 "name": "BaseBdev1", 00:11:38.391 "uuid": "1f61c455-a9f8-596e-abd0-0d09d00f2d4d", 00:11:38.391 "is_configured": true, 00:11:38.391 "data_offset": 2048, 00:11:38.391 "data_size": 63488 00:11:38.391 }, 00:11:38.391 { 00:11:38.391 "name": "BaseBdev2", 00:11:38.391 "uuid": "f4e27870-b236-51e1-a08a-cac9e5524b01", 00:11:38.391 "is_configured": true, 00:11:38.391 "data_offset": 2048, 00:11:38.391 "data_size": 63488 00:11:38.391 }, 00:11:38.391 { 00:11:38.391 "name": "BaseBdev3", 00:11:38.391 "uuid": "e7a6fbe0-9210-56ff-b275-db6d1510fdfd", 00:11:38.391 "is_configured": true, 00:11:38.391 "data_offset": 2048, 00:11:38.391 "data_size": 63488 00:11:38.391 }, 00:11:38.391 { 00:11:38.391 "name": "BaseBdev4", 00:11:38.391 "uuid": "e9138614-95b0-5b0d-b529-b64170a2df31", 00:11:38.391 "is_configured": true, 00:11:38.391 "data_offset": 2048, 00:11:38.391 "data_size": 63488 00:11:38.391 } 00:11:38.391 ] 00:11:38.391 }' 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.391 20:07:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.962 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:38.962 20:07:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:38.962 [2024-12-08 20:07:10.714091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 [2024-12-08 20:07:11.653197] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:39.903 [2024-12-08 20:07:11.653334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.903 [2024-12-08 20:07:11.653599] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.903 "name": "raid_bdev1", 00:11:39.903 "uuid": "8437c9f6-0e54-46f8-b1fd-476cd905b07d", 00:11:39.903 "strip_size_kb": 0, 00:11:39.903 "state": "online", 00:11:39.903 "raid_level": "raid1", 00:11:39.903 "superblock": true, 00:11:39.903 "num_base_bdevs": 4, 00:11:39.903 "num_base_bdevs_discovered": 3, 00:11:39.903 "num_base_bdevs_operational": 3, 00:11:39.903 "base_bdevs_list": [ 00:11:39.903 { 00:11:39.903 "name": null, 00:11:39.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.903 "is_configured": false, 00:11:39.903 "data_offset": 0, 00:11:39.903 "data_size": 63488 00:11:39.903 }, 00:11:39.903 { 00:11:39.903 "name": "BaseBdev2", 00:11:39.903 "uuid": "f4e27870-b236-51e1-a08a-cac9e5524b01", 00:11:39.903 "is_configured": true, 00:11:39.903 "data_offset": 2048, 00:11:39.903 "data_size": 63488 00:11:39.903 }, 00:11:39.903 { 00:11:39.903 "name": "BaseBdev3", 00:11:39.903 "uuid": "e7a6fbe0-9210-56ff-b275-db6d1510fdfd", 00:11:39.903 "is_configured": true, 00:11:39.903 "data_offset": 2048, 00:11:39.903 "data_size": 63488 00:11:39.903 }, 00:11:39.903 { 00:11:39.903 "name": "BaseBdev4", 00:11:39.903 "uuid": "e9138614-95b0-5b0d-b529-b64170a2df31", 00:11:39.903 "is_configured": true, 00:11:39.903 "data_offset": 2048, 00:11:39.903 "data_size": 63488 00:11:39.903 } 00:11:39.903 ] 
00:11:39.903 }' 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.903 20:07:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.164 [2024-12-08 20:07:12.056514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.164 [2024-12-08 20:07:12.056548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.164 [2024-12-08 20:07:12.059345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.164 [2024-12-08 20:07:12.059472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.164 [2024-12-08 20:07:12.059617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.164 [2024-12-08 20:07:12.059672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:40.164 { 00:11:40.164 "results": [ 00:11:40.164 { 00:11:40.164 "job": "raid_bdev1", 00:11:40.164 "core_mask": "0x1", 00:11:40.164 "workload": "randrw", 00:11:40.164 "percentage": 50, 00:11:40.164 "status": "finished", 00:11:40.164 "queue_depth": 1, 00:11:40.164 "io_size": 131072, 00:11:40.164 "runtime": 1.343098, 00:11:40.164 "iops": 11408.698397287466, 00:11:40.164 "mibps": 1426.0872996609332, 00:11:40.164 "io_failed": 0, 00:11:40.164 "io_timeout": 0, 00:11:40.164 "avg_latency_us": 84.99761439762757, 00:11:40.164 "min_latency_us": 22.805240174672488, 00:11:40.164 "max_latency_us": 1438.071615720524 00:11:40.164 } 00:11:40.164 ], 00:11:40.164 "core_count": 1 
00:11:40.164 } 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74933 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74933 ']' 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74933 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74933 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74933' 00:11:40.164 killing process with pid 74933 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74933 00:11:40.164 [2024-12-08 20:07:12.096347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.164 20:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74933 00:11:40.734 [2024-12-08 20:07:12.415008] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.674 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:41.674 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cJGX3RUPza 00:11:41.674 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:41.674 ************************************ 00:11:41.674 END TEST 
raid_write_error_test 00:11:41.674 ************************************ 00:11:41.674 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:41.674 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:41.675 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.675 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:41.675 20:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:41.675 00:11:41.675 real 0m4.555s 00:11:41.675 user 0m5.321s 00:11:41.675 sys 0m0.541s 00:11:41.675 20:07:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.675 20:07:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.675 20:07:13 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:41.675 20:07:13 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:41.675 20:07:13 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:41.675 20:07:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:41.675 20:07:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.675 20:07:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.675 ************************************ 00:11:41.675 START TEST raid_rebuild_test 00:11:41.675 ************************************ 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:11:41.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75078 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75078 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75078 ']' 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.675 20:07:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.935 [2024-12-08 20:07:13.722815] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:41.935 [2024-12-08 20:07:13.723030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.935 Zero copy mechanism will not be used. 
00:11:41.935 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ] 00:11:41.935 [2024-12-08 20:07:13.893651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.194 [2024-12-08 20:07:14.002315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.454 [2024-12-08 20:07:14.193185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.454 [2024-12-08 20:07:14.193299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 BaseBdev1_malloc 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.716 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 [2024-12-08 20:07:14.588718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:42.717 [2024-12-08 20:07:14.588819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.717 [2024-12-08 
20:07:14.588860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:42.717 [2024-12-08 20:07:14.588890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.717 [2024-12-08 20:07:14.591050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.717 [2024-12-08 20:07:14.591123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.717 BaseBdev1 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.717 BaseBdev2_malloc 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.717 [2024-12-08 20:07:14.640154] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:42.717 [2024-12-08 20:07:14.640214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.717 [2024-12-08 20:07:14.640238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:42.717 [2024-12-08 20:07:14.640248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:42.717 [2024-12-08 20:07:14.642348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.717 [2024-12-08 20:07:14.642443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.717 BaseBdev2 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.717 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.979 spare_malloc 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.979 spare_delay 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.979 [2024-12-08 20:07:14.716166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:42.979 [2024-12-08 20:07:14.716226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.979 [2024-12-08 20:07:14.716246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:42.979 [2024-12-08 20:07:14.716256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.979 [2024-12-08 20:07:14.718341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.979 [2024-12-08 20:07:14.718464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:42.979 spare 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.979 [2024-12-08 20:07:14.728201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.979 [2024-12-08 20:07:14.729986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.979 [2024-12-08 20:07:14.730075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.979 [2024-12-08 20:07:14.730088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.979 [2024-12-08 20:07:14.730334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.979 [2024-12-08 20:07:14.730504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.979 [2024-12-08 20:07:14.730514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:42.979 [2024-12-08 20:07:14.730656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.979 
20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.979 "name": "raid_bdev1", 00:11:42.979 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:42.979 "strip_size_kb": 0, 00:11:42.979 "state": "online", 00:11:42.979 "raid_level": "raid1", 00:11:42.979 "superblock": false, 00:11:42.979 "num_base_bdevs": 2, 00:11:42.979 "num_base_bdevs_discovered": 
2, 00:11:42.979 "num_base_bdevs_operational": 2, 00:11:42.979 "base_bdevs_list": [ 00:11:42.979 { 00:11:42.979 "name": "BaseBdev1", 00:11:42.979 "uuid": "a2896ead-f0f5-595f-983d-69b181c47e65", 00:11:42.979 "is_configured": true, 00:11:42.979 "data_offset": 0, 00:11:42.979 "data_size": 65536 00:11:42.979 }, 00:11:42.979 { 00:11:42.979 "name": "BaseBdev2", 00:11:42.979 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:42.979 "is_configured": true, 00:11:42.979 "data_offset": 0, 00:11:42.979 "data_size": 65536 00:11:42.979 } 00:11:42.979 ] 00:11:42.979 }' 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.979 20:07:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.249 [2024-12-08 20:07:15.143815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.249 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:43.509 [2024-12-08 20:07:15.403174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:43.509 /dev/nbd0 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.509 1+0 records in 00:11:43.509 1+0 records out 00:11:43.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535303 s, 7.7 MB/s 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:43.509 20:07:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:47.700 65536+0 records in 00:11:47.700 65536+0 records out 00:11:47.700 33554432 bytes (34 MB, 32 MiB) copied, 3.79157 s, 8.8 MB/s 00:11:47.700 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.701 [2024-12-08 20:07:19.475436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.701 
20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.701 [2024-12-08 20:07:19.495072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.701 "name": "raid_bdev1", 00:11:47.701 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:47.701 "strip_size_kb": 0, 00:11:47.701 "state": "online", 00:11:47.701 "raid_level": "raid1", 00:11:47.701 "superblock": false, 00:11:47.701 "num_base_bdevs": 2, 00:11:47.701 "num_base_bdevs_discovered": 1, 00:11:47.701 "num_base_bdevs_operational": 1, 00:11:47.701 "base_bdevs_list": [ 00:11:47.701 { 00:11:47.701 "name": null, 00:11:47.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.701 "is_configured": false, 00:11:47.701 "data_offset": 0, 00:11:47.701 "data_size": 65536 00:11:47.701 }, 00:11:47.701 { 00:11:47.701 "name": "BaseBdev2", 00:11:47.701 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:47.701 "is_configured": true, 00:11:47.701 "data_offset": 0, 00:11:47.701 "data_size": 65536 00:11:47.701 } 00:11:47.701 ] 00:11:47.701 }' 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.701 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 20:07:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:47.960 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.960 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.220 [2024-12-08 20:07:19.942245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.220 [2024-12-08 20:07:19.959312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:48.220 20:07:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.220 20:07:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.220 [2024-12-08 20:07:19.961281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.159 20:07:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.159 "name": "raid_bdev1", 00:11:49.159 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:49.159 "strip_size_kb": 0, 00:11:49.159 "state": "online", 00:11:49.159 "raid_level": "raid1", 00:11:49.159 "superblock": false, 00:11:49.159 "num_base_bdevs": 2, 00:11:49.159 "num_base_bdevs_discovered": 2, 00:11:49.159 "num_base_bdevs_operational": 2, 00:11:49.159 "process": { 00:11:49.159 "type": "rebuild", 00:11:49.159 "target": "spare", 00:11:49.159 "progress": { 00:11:49.159 "blocks": 20480, 00:11:49.159 "percent": 31 00:11:49.159 } 00:11:49.159 }, 00:11:49.159 "base_bdevs_list": [ 00:11:49.159 { 
00:11:49.159 "name": "spare", 00:11:49.159 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:49.159 "is_configured": true, 00:11:49.159 "data_offset": 0, 00:11:49.159 "data_size": 65536 00:11:49.159 }, 00:11:49.159 { 00:11:49.159 "name": "BaseBdev2", 00:11:49.159 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:49.159 "is_configured": true, 00:11:49.159 "data_offset": 0, 00:11:49.159 "data_size": 65536 00:11:49.159 } 00:11:49.159 ] 00:11:49.159 }' 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.159 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.160 [2024-12-08 20:07:21.100915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:49.418 [2024-12-08 20:07:21.167308] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:49.418 [2024-12-08 20:07:21.167452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.418 [2024-12-08 20:07:21.167491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:49.418 [2024-12-08 20:07:21.167505] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.418 20:07:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.418 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.418 "name": "raid_bdev1", 00:11:49.418 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:49.418 "strip_size_kb": 0, 00:11:49.418 "state": "online", 00:11:49.418 "raid_level": "raid1", 00:11:49.418 "superblock": false, 00:11:49.418 "num_base_bdevs": 2, 00:11:49.418 "num_base_bdevs_discovered": 1, 
00:11:49.418 "num_base_bdevs_operational": 1, 00:11:49.418 "base_bdevs_list": [ 00:11:49.418 { 00:11:49.418 "name": null, 00:11:49.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.418 "is_configured": false, 00:11:49.418 "data_offset": 0, 00:11:49.418 "data_size": 65536 00:11:49.418 }, 00:11:49.418 { 00:11:49.418 "name": "BaseBdev2", 00:11:49.418 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:49.418 "is_configured": true, 00:11:49.419 "data_offset": 0, 00:11:49.419 "data_size": 65536 00:11:49.419 } 00:11:49.419 ] 00:11:49.419 }' 00:11:49.419 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.419 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.677 "name": "raid_bdev1", 00:11:49.677 "uuid": 
"f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:49.677 "strip_size_kb": 0, 00:11:49.677 "state": "online", 00:11:49.677 "raid_level": "raid1", 00:11:49.677 "superblock": false, 00:11:49.677 "num_base_bdevs": 2, 00:11:49.677 "num_base_bdevs_discovered": 1, 00:11:49.677 "num_base_bdevs_operational": 1, 00:11:49.677 "base_bdevs_list": [ 00:11:49.677 { 00:11:49.677 "name": null, 00:11:49.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.677 "is_configured": false, 00:11:49.677 "data_offset": 0, 00:11:49.677 "data_size": 65536 00:11:49.677 }, 00:11:49.677 { 00:11:49.677 "name": "BaseBdev2", 00:11:49.677 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:49.677 "is_configured": true, 00:11:49.677 "data_offset": 0, 00:11:49.677 "data_size": 65536 00:11:49.677 } 00:11:49.677 ] 00:11:49.677 }' 00:11:49.677 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.935 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:49.935 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.935 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:49.935 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:49.935 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.936 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.936 [2024-12-08 20:07:21.755123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.936 [2024-12-08 20:07:21.771124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:49.936 20:07:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.936 20:07:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:49.936 [2024-12-08 20:07:21.773075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.874 "name": "raid_bdev1", 00:11:50.874 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:50.874 "strip_size_kb": 0, 00:11:50.874 "state": "online", 00:11:50.874 "raid_level": "raid1", 00:11:50.874 "superblock": false, 00:11:50.874 "num_base_bdevs": 2, 00:11:50.874 "num_base_bdevs_discovered": 2, 00:11:50.874 "num_base_bdevs_operational": 2, 00:11:50.874 "process": { 00:11:50.874 "type": "rebuild", 00:11:50.874 "target": "spare", 00:11:50.874 "progress": { 00:11:50.874 "blocks": 20480, 00:11:50.874 "percent": 31 00:11:50.874 } 00:11:50.874 }, 00:11:50.874 "base_bdevs_list": [ 00:11:50.874 { 00:11:50.874 "name": "spare", 00:11:50.874 "uuid": 
"02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:50.874 "is_configured": true, 00:11:50.874 "data_offset": 0, 00:11:50.874 "data_size": 65536 00:11:50.874 }, 00:11:50.874 { 00:11:50.874 "name": "BaseBdev2", 00:11:50.874 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:50.874 "is_configured": true, 00:11:50.874 "data_offset": 0, 00:11:50.874 "data_size": 65536 00:11:50.874 } 00:11:50.874 ] 00:11:50.874 }' 00:11:50.874 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=364 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.135 "name": "raid_bdev1", 00:11:51.135 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:51.135 "strip_size_kb": 0, 00:11:51.135 "state": "online", 00:11:51.135 "raid_level": "raid1", 00:11:51.135 "superblock": false, 00:11:51.135 "num_base_bdevs": 2, 00:11:51.135 "num_base_bdevs_discovered": 2, 00:11:51.135 "num_base_bdevs_operational": 2, 00:11:51.135 "process": { 00:11:51.135 "type": "rebuild", 00:11:51.135 "target": "spare", 00:11:51.135 "progress": { 00:11:51.135 "blocks": 22528, 00:11:51.135 "percent": 34 00:11:51.135 } 00:11:51.135 }, 00:11:51.135 "base_bdevs_list": [ 00:11:51.135 { 00:11:51.135 "name": "spare", 00:11:51.135 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:51.135 "is_configured": true, 00:11:51.135 "data_offset": 0, 00:11:51.135 "data_size": 65536 00:11:51.135 }, 00:11:51.135 { 00:11:51.135 "name": "BaseBdev2", 00:11:51.135 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:51.135 "is_configured": true, 00:11:51.135 "data_offset": 0, 00:11:51.135 "data_size": 65536 00:11:51.135 } 00:11:51.135 ] 00:11:51.135 }' 00:11:51.135 20:07:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.135 20:07:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.135 20:07:23 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.135 20:07:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.135 20:07:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.518 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.518 "name": "raid_bdev1", 00:11:52.518 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:52.518 "strip_size_kb": 0, 00:11:52.518 "state": "online", 00:11:52.518 "raid_level": "raid1", 00:11:52.518 "superblock": false, 00:11:52.518 "num_base_bdevs": 2, 00:11:52.518 "num_base_bdevs_discovered": 2, 00:11:52.518 "num_base_bdevs_operational": 2, 00:11:52.518 "process": { 00:11:52.518 "type": "rebuild", 00:11:52.518 "target": "spare", 
00:11:52.518 "progress": { 00:11:52.518 "blocks": 47104, 00:11:52.519 "percent": 71 00:11:52.519 } 00:11:52.519 }, 00:11:52.519 "base_bdevs_list": [ 00:11:52.519 { 00:11:52.519 "name": "spare", 00:11:52.519 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:52.519 "is_configured": true, 00:11:52.519 "data_offset": 0, 00:11:52.519 "data_size": 65536 00:11:52.519 }, 00:11:52.519 { 00:11:52.519 "name": "BaseBdev2", 00:11:52.519 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:52.519 "is_configured": true, 00:11:52.519 "data_offset": 0, 00:11:52.519 "data_size": 65536 00:11:52.519 } 00:11:52.519 ] 00:11:52.519 }' 00:11:52.519 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.519 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.519 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.519 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.519 20:07:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.090 [2024-12-08 20:07:24.987981] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:53.090 [2024-12-08 20:07:24.988153] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:53.090 [2024-12-08 20:07:24.988242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.348 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.349 "name": "raid_bdev1", 00:11:53.349 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:53.349 "strip_size_kb": 0, 00:11:53.349 "state": "online", 00:11:53.349 "raid_level": "raid1", 00:11:53.349 "superblock": false, 00:11:53.349 "num_base_bdevs": 2, 00:11:53.349 "num_base_bdevs_discovered": 2, 00:11:53.349 "num_base_bdevs_operational": 2, 00:11:53.349 "base_bdevs_list": [ 00:11:53.349 { 00:11:53.349 "name": "spare", 00:11:53.349 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:53.349 "is_configured": true, 00:11:53.349 "data_offset": 0, 00:11:53.349 "data_size": 65536 00:11:53.349 }, 00:11:53.349 { 00:11:53.349 "name": "BaseBdev2", 00:11:53.349 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:53.349 "is_configured": true, 00:11:53.349 "data_offset": 0, 00:11:53.349 "data_size": 65536 00:11:53.349 } 00:11:53.349 ] 00:11:53.349 }' 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:53.349 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.608 "name": "raid_bdev1", 00:11:53.608 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:53.608 "strip_size_kb": 0, 00:11:53.608 "state": "online", 00:11:53.608 "raid_level": "raid1", 00:11:53.608 "superblock": false, 00:11:53.608 "num_base_bdevs": 2, 00:11:53.608 "num_base_bdevs_discovered": 2, 00:11:53.608 "num_base_bdevs_operational": 2, 00:11:53.608 "base_bdevs_list": [ 00:11:53.608 { 00:11:53.608 "name": "spare", 00:11:53.608 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:53.608 "is_configured": true, 00:11:53.608 "data_offset": 0, 00:11:53.608 "data_size": 65536 
00:11:53.608 }, 00:11:53.608 { 00:11:53.608 "name": "BaseBdev2", 00:11:53.608 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:53.608 "is_configured": true, 00:11:53.608 "data_offset": 0, 00:11:53.608 "data_size": 65536 00:11:53.608 } 00:11:53.608 ] 00:11:53.608 }' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.608 "name": "raid_bdev1", 00:11:53.608 "uuid": "f3413f4f-6aee-4ea0-9bdb-e563e460fcd7", 00:11:53.608 "strip_size_kb": 0, 00:11:53.608 "state": "online", 00:11:53.608 "raid_level": "raid1", 00:11:53.608 "superblock": false, 00:11:53.608 "num_base_bdevs": 2, 00:11:53.608 "num_base_bdevs_discovered": 2, 00:11:53.608 "num_base_bdevs_operational": 2, 00:11:53.608 "base_bdevs_list": [ 00:11:53.608 { 00:11:53.608 "name": "spare", 00:11:53.608 "uuid": "02c984c8-3342-5ae1-95dc-8c669b03dd14", 00:11:53.608 "is_configured": true, 00:11:53.608 "data_offset": 0, 00:11:53.608 "data_size": 65536 00:11:53.608 }, 00:11:53.608 { 00:11:53.608 "name": "BaseBdev2", 00:11:53.608 "uuid": "d96106a4-1a6c-5c34-afaa-18cca3e1ec56", 00:11:53.608 "is_configured": true, 00:11:53.608 "data_offset": 0, 00:11:53.608 "data_size": 65536 00:11:53.608 } 00:11:53.608 ] 00:11:53.608 }' 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.608 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.178 [2024-12-08 20:07:25.917780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.178 [2024-12-08 20:07:25.917870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:54.178 [2024-12-08 20:07:25.918061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.178 [2024-12-08 20:07:25.918192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.178 [2024-12-08 20:07:25.918237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.178 20:07:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.178 20:07:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:54.437 /dev/nbd0 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.438 1+0 records in 00:11:54.438 1+0 records out 00:11:54.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210421 s, 19.5 MB/s 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.438 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:54.713 /dev/nbd1 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.713 1+0 records in 00:11:54.713 1+0 records out 00:11:54.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287247 s, 14.3 MB/s 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.713 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.989 20:07:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75078 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75078 ']' 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75078 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75078 00:11:55.250 killing process with pid 75078 00:11:55.250 Received shutdown signal, test time was about 60.000000 seconds 00:11:55.250 00:11:55.250 Latency(us) 00:11:55.250 [2024-12-08T20:07:27.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.250 [2024-12-08T20:07:27.228Z] =================================================================================================================== 00:11:55.250 [2024-12-08T20:07:27.228Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75078' 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75078 00:11:55.250 [2024-12-08 20:07:27.112334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.250 20:07:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75078 00:11:55.511 [2024-12-08 20:07:27.406916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.891 20:07:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:56.891 00:11:56.891 real 0m14.884s 00:11:56.892 user 0m16.998s 00:11:56.892 sys 0m2.804s 00:11:56.892 ************************************ 
00:11:56.892 END TEST raid_rebuild_test 00:11:56.892 ************************************ 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.892 20:07:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:56.892 20:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:56.892 20:07:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.892 20:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.892 ************************************ 00:11:56.892 START TEST raid_rebuild_test_sb 00:11:56.892 ************************************ 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.892 
20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75486 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75486 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75486 ']' 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.892 20:07:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.892 [2024-12-08 20:07:28.681472] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:11:56.892 [2024-12-08 20:07:28.681663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75486 ] 00:11:56.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.892 Zero copy mechanism will not be used. 
00:11:56.892 [2024-12-08 20:07:28.854316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.151 [2024-12-08 20:07:28.967277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.410 [2024-12-08 20:07:29.160640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.410 [2024-12-08 20:07:29.160698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.671 BaseBdev1_malloc 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.671 [2024-12-08 20:07:29.553983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:57.671 [2024-12-08 20:07:29.554040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.671 [2024-12-08 20:07:29.554078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.671 [2024-12-08 
20:07:29.554089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.671 [2024-12-08 20:07:29.556130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.671 [2024-12-08 20:07:29.556169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.671 BaseBdev1 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.671 BaseBdev2_malloc 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.671 [2024-12-08 20:07:29.608136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:57.671 [2024-12-08 20:07:29.608194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.671 [2024-12-08 20:07:29.608216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.671 [2024-12-08 20:07:29.608226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.671 [2024-12-08 20:07:29.610271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:57.671 [2024-12-08 20:07:29.610321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.671 BaseBdev2 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.671 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.931 spare_malloc 00:11:57.931 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.931 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.932 spare_delay 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.932 [2024-12-08 20:07:29.688310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.932 [2024-12-08 20:07:29.688372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.932 [2024-12-08 20:07:29.688391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:57.932 [2024-12-08 20:07:29.688401] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.932 [2024-12-08 20:07:29.690439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.932 [2024-12-08 20:07:29.690514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.932 spare 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.932 [2024-12-08 20:07:29.700347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.932 [2024-12-08 20:07:29.702064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.932 [2024-12-08 20:07:29.702236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.932 [2024-12-08 20:07:29.702251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.932 [2024-12-08 20:07:29.702471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.932 [2024-12-08 20:07:29.702617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.932 [2024-12-08 20:07:29.702625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.932 [2024-12-08 20:07:29.702768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.932 "name": "raid_bdev1", 00:11:57.932 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:11:57.932 "strip_size_kb": 0, 00:11:57.932 "state": "online", 00:11:57.932 "raid_level": "raid1", 00:11:57.932 "superblock": true, 00:11:57.932 "num_base_bdevs": 2, 00:11:57.932 
"num_base_bdevs_discovered": 2, 00:11:57.932 "num_base_bdevs_operational": 2, 00:11:57.932 "base_bdevs_list": [ 00:11:57.932 { 00:11:57.932 "name": "BaseBdev1", 00:11:57.932 "uuid": "3ef7b053-00f3-51ef-bc39-af61f7709dc2", 00:11:57.932 "is_configured": true, 00:11:57.932 "data_offset": 2048, 00:11:57.932 "data_size": 63488 00:11:57.932 }, 00:11:57.932 { 00:11:57.932 "name": "BaseBdev2", 00:11:57.932 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:11:57.932 "is_configured": true, 00:11:57.932 "data_offset": 2048, 00:11:57.932 "data_size": 63488 00:11:57.932 } 00:11:57.932 ] 00:11:57.932 }' 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.932 20:07:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 [2024-12-08 20:07:30.191806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.501 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.502 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:58.502 [2024-12-08 20:07:30.451164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:58.502 /dev/nbd0 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.762 1+0 records in 00:11:58.762 1+0 records out 00:11:58.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042532 s, 9.6 MB/s 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.762 20:07:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:58.762 20:07:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:02.957 63488+0 records in 00:12:02.957 63488+0 records out 00:12:02.958 32505856 bytes (33 MB, 31 MiB) copied, 3.77476 s, 8.6 MB/s 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.958 [2024-12-08 20:07:34.524332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.958 [2024-12-08 20:07:34.540772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.958 20:07:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.958 "name": "raid_bdev1", 00:12:02.958 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:02.958 "strip_size_kb": 0, 00:12:02.958 "state": "online", 00:12:02.958 "raid_level": "raid1", 00:12:02.958 "superblock": true, 00:12:02.958 "num_base_bdevs": 2, 00:12:02.958 "num_base_bdevs_discovered": 1, 00:12:02.958 "num_base_bdevs_operational": 1, 00:12:02.958 "base_bdevs_list": [ 00:12:02.958 { 00:12:02.958 "name": null, 00:12:02.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.958 "is_configured": false, 00:12:02.958 "data_offset": 0, 00:12:02.958 "data_size": 63488 00:12:02.958 }, 00:12:02.958 { 00:12:02.958 "name": "BaseBdev2", 00:12:02.958 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:02.958 "is_configured": true, 00:12:02.958 "data_offset": 2048, 00:12:02.958 "data_size": 63488 00:12:02.958 } 00:12:02.958 ] 00:12:02.958 }' 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.958 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.217 20:07:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:03.217 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.217 20:07:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.217 [2024-12-08 20:07:34.996019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:03.217 [2024-12-08 20:07:35.012604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:03.217 20:07:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.217 [2024-12-08 20:07:35.014522] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.217 20:07:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.153 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.153 "name": "raid_bdev1", 00:12:04.153 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:04.153 "strip_size_kb": 0, 00:12:04.153 "state": "online", 00:12:04.153 "raid_level": "raid1", 00:12:04.153 "superblock": true, 00:12:04.153 "num_base_bdevs": 2, 00:12:04.153 
"num_base_bdevs_discovered": 2, 00:12:04.153 "num_base_bdevs_operational": 2, 00:12:04.153 "process": { 00:12:04.153 "type": "rebuild", 00:12:04.153 "target": "spare", 00:12:04.153 "progress": { 00:12:04.153 "blocks": 20480, 00:12:04.153 "percent": 32 00:12:04.153 } 00:12:04.153 }, 00:12:04.153 "base_bdevs_list": [ 00:12:04.153 { 00:12:04.153 "name": "spare", 00:12:04.153 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:04.153 "is_configured": true, 00:12:04.153 "data_offset": 2048, 00:12:04.153 "data_size": 63488 00:12:04.153 }, 00:12:04.153 { 00:12:04.153 "name": "BaseBdev2", 00:12:04.153 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:04.153 "is_configured": true, 00:12:04.153 "data_offset": 2048, 00:12:04.153 "data_size": 63488 00:12:04.153 } 00:12:04.153 ] 00:12:04.153 }' 00:12:04.154 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.154 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.154 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.414 [2024-12-08 20:07:36.153840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.414 [2024-12-08 20:07:36.219879] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:04.414 [2024-12-08 20:07:36.220023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.414 [2024-12-08 20:07:36.220062] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.414 [2024-12-08 20:07:36.220090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.414 20:07:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.414 "name": "raid_bdev1", 00:12:04.414 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:04.414 "strip_size_kb": 0, 00:12:04.414 "state": "online", 00:12:04.414 "raid_level": "raid1", 00:12:04.414 "superblock": true, 00:12:04.414 "num_base_bdevs": 2, 00:12:04.414 "num_base_bdevs_discovered": 1, 00:12:04.414 "num_base_bdevs_operational": 1, 00:12:04.414 "base_bdevs_list": [ 00:12:04.414 { 00:12:04.414 "name": null, 00:12:04.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.414 "is_configured": false, 00:12:04.414 "data_offset": 0, 00:12:04.414 "data_size": 63488 00:12:04.414 }, 00:12:04.414 { 00:12:04.414 "name": "BaseBdev2", 00:12:04.414 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:04.414 "is_configured": true, 00:12:04.414 "data_offset": 2048, 00:12:04.414 "data_size": 63488 00:12:04.414 } 00:12:04.414 ] 00:12:04.414 }' 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.414 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.984 "name": "raid_bdev1", 00:12:04.984 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:04.984 "strip_size_kb": 0, 00:12:04.984 "state": "online", 00:12:04.984 "raid_level": "raid1", 00:12:04.984 "superblock": true, 00:12:04.984 "num_base_bdevs": 2, 00:12:04.984 "num_base_bdevs_discovered": 1, 00:12:04.984 "num_base_bdevs_operational": 1, 00:12:04.984 "base_bdevs_list": [ 00:12:04.984 { 00:12:04.984 "name": null, 00:12:04.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.984 "is_configured": false, 00:12:04.984 "data_offset": 0, 00:12:04.984 "data_size": 63488 00:12:04.984 }, 00:12:04.984 { 00:12:04.984 "name": "BaseBdev2", 00:12:04.984 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:04.984 "is_configured": true, 00:12:04.984 "data_offset": 2048, 00:12:04.984 "data_size": 63488 00:12:04.984 } 00:12:04.984 ] 00:12:04.984 }' 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:04.984 [2024-12-08 20:07:36.863013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.984 [2024-12-08 20:07:36.879484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.984 20:07:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:04.984 [2024-12-08 20:07:36.881340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.923 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.183 20:07:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.183 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.183 "name": "raid_bdev1", 00:12:06.183 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:06.183 "strip_size_kb": 0, 00:12:06.183 "state": "online", 00:12:06.183 "raid_level": "raid1", 
00:12:06.183 "superblock": true, 00:12:06.183 "num_base_bdevs": 2, 00:12:06.183 "num_base_bdevs_discovered": 2, 00:12:06.183 "num_base_bdevs_operational": 2, 00:12:06.183 "process": { 00:12:06.183 "type": "rebuild", 00:12:06.183 "target": "spare", 00:12:06.183 "progress": { 00:12:06.183 "blocks": 20480, 00:12:06.183 "percent": 32 00:12:06.183 } 00:12:06.183 }, 00:12:06.183 "base_bdevs_list": [ 00:12:06.183 { 00:12:06.183 "name": "spare", 00:12:06.183 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:06.183 "is_configured": true, 00:12:06.183 "data_offset": 2048, 00:12:06.183 "data_size": 63488 00:12:06.183 }, 00:12:06.183 { 00:12:06.183 "name": "BaseBdev2", 00:12:06.183 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:06.183 "is_configured": true, 00:12:06.183 "data_offset": 2048, 00:12:06.183 "data_size": 63488 00:12:06.183 } 00:12:06.183 ] 00:12:06.183 }' 00:12:06.183 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.183 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.183 20:07:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:06.184 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:06.184 20:07:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.184 "name": "raid_bdev1", 00:12:06.184 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:06.184 "strip_size_kb": 0, 00:12:06.184 "state": "online", 00:12:06.184 "raid_level": "raid1", 00:12:06.184 "superblock": true, 00:12:06.184 "num_base_bdevs": 2, 00:12:06.184 "num_base_bdevs_discovered": 2, 00:12:06.184 "num_base_bdevs_operational": 2, 00:12:06.184 "process": { 00:12:06.184 "type": "rebuild", 00:12:06.184 "target": "spare", 00:12:06.184 "progress": { 00:12:06.184 "blocks": 22528, 00:12:06.184 "percent": 35 00:12:06.184 } 00:12:06.184 }, 00:12:06.184 "base_bdevs_list": [ 
00:12:06.184 { 00:12:06.184 "name": "spare", 00:12:06.184 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:06.184 "is_configured": true, 00:12:06.184 "data_offset": 2048, 00:12:06.184 "data_size": 63488 00:12:06.184 }, 00:12:06.184 { 00:12:06.184 "name": "BaseBdev2", 00:12:06.184 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:06.184 "is_configured": true, 00:12:06.184 "data_offset": 2048, 00:12:06.184 "data_size": 63488 00:12:06.184 } 00:12:06.184 ] 00:12:06.184 }' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.184 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.444 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.444 20:07:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:07.383 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.383 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.383 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.383 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.384 "name": "raid_bdev1", 00:12:07.384 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:07.384 "strip_size_kb": 0, 00:12:07.384 "state": "online", 00:12:07.384 "raid_level": "raid1", 00:12:07.384 "superblock": true, 00:12:07.384 "num_base_bdevs": 2, 00:12:07.384 "num_base_bdevs_discovered": 2, 00:12:07.384 "num_base_bdevs_operational": 2, 00:12:07.384 "process": { 00:12:07.384 "type": "rebuild", 00:12:07.384 "target": "spare", 00:12:07.384 "progress": { 00:12:07.384 "blocks": 47104, 00:12:07.384 "percent": 74 00:12:07.384 } 00:12:07.384 }, 00:12:07.384 "base_bdevs_list": [ 00:12:07.384 { 00:12:07.384 "name": "spare", 00:12:07.384 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:07.384 "is_configured": true, 00:12:07.384 "data_offset": 2048, 00:12:07.384 "data_size": 63488 00:12:07.384 }, 00:12:07.384 { 00:12:07.384 "name": "BaseBdev2", 00:12:07.384 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:07.384 "is_configured": true, 00:12:07.384 "data_offset": 2048, 00:12:07.384 "data_size": 63488 00:12:07.384 } 00:12:07.384 ] 00:12:07.384 }' 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.384 20:07:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.324 [2024-12-08 
20:07:39.994191] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:08.324 [2024-12-08 20:07:39.994316] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:08.324 [2024-12-08 20:07:39.994458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.584 "name": "raid_bdev1", 00:12:08.584 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:08.584 "strip_size_kb": 0, 00:12:08.584 "state": "online", 00:12:08.584 "raid_level": "raid1", 00:12:08.584 "superblock": true, 00:12:08.584 "num_base_bdevs": 2, 00:12:08.584 "num_base_bdevs_discovered": 2, 00:12:08.584 
"num_base_bdevs_operational": 2, 00:12:08.584 "base_bdevs_list": [ 00:12:08.584 { 00:12:08.584 "name": "spare", 00:12:08.584 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:08.584 "is_configured": true, 00:12:08.584 "data_offset": 2048, 00:12:08.584 "data_size": 63488 00:12:08.584 }, 00:12:08.584 { 00:12:08.584 "name": "BaseBdev2", 00:12:08.584 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:08.584 "is_configured": true, 00:12:08.584 "data_offset": 2048, 00:12:08.584 "data_size": 63488 00:12:08.584 } 00:12:08.584 ] 00:12:08.584 }' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.584 20:07:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.584 "name": "raid_bdev1", 00:12:08.584 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:08.584 "strip_size_kb": 0, 00:12:08.584 "state": "online", 00:12:08.584 "raid_level": "raid1", 00:12:08.584 "superblock": true, 00:12:08.584 "num_base_bdevs": 2, 00:12:08.584 "num_base_bdevs_discovered": 2, 00:12:08.584 "num_base_bdevs_operational": 2, 00:12:08.584 "base_bdevs_list": [ 00:12:08.584 { 00:12:08.584 "name": "spare", 00:12:08.584 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:08.584 "is_configured": true, 00:12:08.584 "data_offset": 2048, 00:12:08.584 "data_size": 63488 00:12:08.584 }, 00:12:08.584 { 00:12:08.584 "name": "BaseBdev2", 00:12:08.584 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:08.584 "is_configured": true, 00:12:08.584 "data_offset": 2048, 00:12:08.584 "data_size": 63488 00:12:08.584 } 00:12:08.584 ] 00:12:08.584 }' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.584 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.845 
20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.845 "name": "raid_bdev1", 00:12:08.845 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:08.845 "strip_size_kb": 0, 00:12:08.845 "state": "online", 00:12:08.845 "raid_level": "raid1", 00:12:08.845 "superblock": true, 00:12:08.845 "num_base_bdevs": 2, 00:12:08.845 "num_base_bdevs_discovered": 2, 00:12:08.845 "num_base_bdevs_operational": 2, 00:12:08.845 "base_bdevs_list": [ 00:12:08.845 { 00:12:08.845 "name": "spare", 00:12:08.845 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:08.845 "is_configured": true, 00:12:08.845 "data_offset": 2048, 00:12:08.845 "data_size": 63488 00:12:08.845 }, 
00:12:08.845 { 00:12:08.845 "name": "BaseBdev2", 00:12:08.845 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:08.845 "is_configured": true, 00:12:08.845 "data_offset": 2048, 00:12:08.845 "data_size": 63488 00:12:08.845 } 00:12:08.845 ] 00:12:08.845 }' 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.845 20:07:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 [2024-12-08 20:07:41.042121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.115 [2024-12-08 20:07:41.042194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.115 [2024-12-08 20:07:41.042296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.115 [2024-12-08 20:07:41.042455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.115 [2024-12-08 20:07:41.042514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:09.115 20:07:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:09.375 /dev/nbd0 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.375 1+0 records in 00:12:09.375 1+0 records out 00:12:09.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133999 s, 3.1 MB/s 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.375 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:09.635 /dev/nbd1 00:12:09.635 20:07:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.635 1+0 records in 00:12:09.635 1+0 records out 00:12:09.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255774 s, 16.0 MB/s 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:09.635 20:07:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.635 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.895 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.155 20:07:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.415 [2024-12-08 20:07:42.196192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:10.415 [2024-12-08 20:07:42.196257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.415 [2024-12-08 20:07:42.196284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:10.415 [2024-12-08 20:07:42.196294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.415 [2024-12-08 20:07:42.198557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.415 [2024-12-08 20:07:42.198596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.415 [2024-12-08 20:07:42.198694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:10.415 [2024-12-08 20:07:42.198743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.415 [2024-12-08 20:07:42.198894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.415 spare 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.415 [2024-12-08 20:07:42.298822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:10.415 [2024-12-08 20:07:42.298936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.415 [2024-12-08 20:07:42.299283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:10.415 [2024-12-08 20:07:42.299489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:10.415 [2024-12-08 20:07:42.299504] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:10.415 [2024-12-08 20:07:42.299682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.415 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.416 
20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.416 "name": "raid_bdev1", 00:12:10.416 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:10.416 "strip_size_kb": 0, 00:12:10.416 "state": "online", 00:12:10.416 "raid_level": "raid1", 00:12:10.416 "superblock": true, 00:12:10.416 "num_base_bdevs": 2, 00:12:10.416 "num_base_bdevs_discovered": 2, 00:12:10.416 "num_base_bdevs_operational": 2, 00:12:10.416 "base_bdevs_list": [ 00:12:10.416 { 00:12:10.416 "name": "spare", 00:12:10.416 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:10.416 "is_configured": true, 00:12:10.416 "data_offset": 2048, 00:12:10.416 "data_size": 63488 00:12:10.416 }, 00:12:10.416 { 00:12:10.416 "name": "BaseBdev2", 00:12:10.416 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:10.416 "is_configured": true, 00:12:10.416 "data_offset": 2048, 00:12:10.416 "data_size": 63488 00:12:10.416 } 00:12:10.416 ] 00:12:10.416 }' 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.416 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.986 20:07:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.986 "name": "raid_bdev1", 00:12:10.986 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:10.986 "strip_size_kb": 0, 00:12:10.986 "state": "online", 00:12:10.986 "raid_level": "raid1", 00:12:10.986 "superblock": true, 00:12:10.986 "num_base_bdevs": 2, 00:12:10.986 "num_base_bdevs_discovered": 2, 00:12:10.986 "num_base_bdevs_operational": 2, 00:12:10.986 "base_bdevs_list": [ 00:12:10.986 { 00:12:10.986 "name": "spare", 00:12:10.986 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:10.986 "is_configured": true, 00:12:10.986 "data_offset": 2048, 00:12:10.986 "data_size": 63488 00:12:10.986 }, 00:12:10.986 { 00:12:10.986 "name": "BaseBdev2", 00:12:10.986 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:10.986 "is_configured": true, 00:12:10.986 "data_offset": 2048, 00:12:10.986 "data_size": 63488 00:12:10.986 } 00:12:10.986 ] 00:12:10.986 }' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.986 [2024-12-08 20:07:42.899063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.986 "name": "raid_bdev1", 00:12:10.986 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:10.986 "strip_size_kb": 0, 00:12:10.986 "state": "online", 00:12:10.986 "raid_level": "raid1", 00:12:10.986 "superblock": true, 00:12:10.986 "num_base_bdevs": 2, 00:12:10.986 "num_base_bdevs_discovered": 1, 00:12:10.986 "num_base_bdevs_operational": 1, 00:12:10.986 "base_bdevs_list": [ 00:12:10.986 { 00:12:10.986 "name": null, 00:12:10.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.986 "is_configured": false, 00:12:10.986 "data_offset": 0, 00:12:10.986 "data_size": 63488 00:12:10.986 }, 00:12:10.986 { 00:12:10.986 "name": "BaseBdev2", 00:12:10.986 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:10.986 "is_configured": true, 00:12:10.986 "data_offset": 2048, 00:12:10.986 "data_size": 63488 00:12:10.986 } 00:12:10.986 ] 00:12:10.986 }' 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.986 20:07:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.554 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.554 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.554 20:07:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.554 [2024-12-08 20:07:43.358310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.554 [2024-12-08 20:07:43.358575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:11.554 [2024-12-08 20:07:43.358637] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:11.554 [2024-12-08 20:07:43.358738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.554 [2024-12-08 20:07:43.374557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:11.554 20:07:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.554 20:07:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:11.554 [2024-12-08 20:07:43.376501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.494 "name": "raid_bdev1", 00:12:12.494 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:12.494 "strip_size_kb": 0, 00:12:12.494 "state": "online", 00:12:12.494 "raid_level": "raid1", 00:12:12.494 "superblock": true, 00:12:12.494 "num_base_bdevs": 2, 00:12:12.494 "num_base_bdevs_discovered": 2, 00:12:12.494 "num_base_bdevs_operational": 2, 00:12:12.494 "process": { 00:12:12.494 "type": "rebuild", 00:12:12.494 "target": "spare", 00:12:12.494 "progress": { 00:12:12.494 "blocks": 20480, 00:12:12.494 "percent": 32 00:12:12.494 } 00:12:12.494 }, 00:12:12.494 "base_bdevs_list": [ 00:12:12.494 { 00:12:12.494 "name": "spare", 00:12:12.494 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:12.494 "is_configured": true, 00:12:12.494 "data_offset": 2048, 00:12:12.494 "data_size": 63488 00:12:12.494 }, 00:12:12.494 { 00:12:12.494 "name": "BaseBdev2", 00:12:12.494 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:12.494 "is_configured": true, 00:12:12.494 "data_offset": 2048, 00:12:12.494 "data_size": 63488 00:12:12.494 } 00:12:12.494 ] 00:12:12.494 }' 00:12:12.494 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:12.754 20:07:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.754 [2024-12-08 20:07:44.511999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.754 [2024-12-08 20:07:44.581528] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:12.754 [2024-12-08 20:07:44.581655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.754 [2024-12-08 20:07:44.581670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.754 [2024-12-08 20:07:44.581680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.754 "name": "raid_bdev1", 00:12:12.754 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:12.754 "strip_size_kb": 0, 00:12:12.754 "state": "online", 00:12:12.754 "raid_level": "raid1", 00:12:12.754 "superblock": true, 00:12:12.754 "num_base_bdevs": 2, 00:12:12.754 "num_base_bdevs_discovered": 1, 00:12:12.754 "num_base_bdevs_operational": 1, 00:12:12.754 "base_bdevs_list": [ 00:12:12.754 { 00:12:12.754 "name": null, 00:12:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.754 "is_configured": false, 00:12:12.754 "data_offset": 0, 00:12:12.754 "data_size": 63488 00:12:12.754 }, 00:12:12.754 { 00:12:12.754 "name": "BaseBdev2", 00:12:12.754 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:12.754 "is_configured": true, 00:12:12.754 "data_offset": 2048, 00:12:12.754 "data_size": 63488 00:12:12.754 } 00:12:12.754 ] 00:12:12.754 }' 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.754 20:07:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.322 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:13.322 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:13.322 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.322 [2024-12-08 20:07:45.044429] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:13.322 [2024-12-08 20:07:45.044554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.322 [2024-12-08 20:07:45.044592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:13.322 [2024-12-08 20:07:45.044622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.322 [2024-12-08 20:07:45.045187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.322 [2024-12-08 20:07:45.045256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:13.322 [2024-12-08 20:07:45.045410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:13.322 [2024-12-08 20:07:45.045457] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:13.322 [2024-12-08 20:07:45.045508] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:13.322 [2024-12-08 20:07:45.045570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.322 [2024-12-08 20:07:45.061089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:13.322 spare 00:12:13.322 20:07:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.322 [2024-12-08 20:07:45.062924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.322 20:07:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.259 "name": "raid_bdev1", 00:12:14.259 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:14.259 "strip_size_kb": 0, 00:12:14.259 "state": "online", 00:12:14.259 
"raid_level": "raid1", 00:12:14.259 "superblock": true, 00:12:14.259 "num_base_bdevs": 2, 00:12:14.259 "num_base_bdevs_discovered": 2, 00:12:14.259 "num_base_bdevs_operational": 2, 00:12:14.259 "process": { 00:12:14.259 "type": "rebuild", 00:12:14.259 "target": "spare", 00:12:14.259 "progress": { 00:12:14.259 "blocks": 20480, 00:12:14.259 "percent": 32 00:12:14.259 } 00:12:14.259 }, 00:12:14.259 "base_bdevs_list": [ 00:12:14.259 { 00:12:14.259 "name": "spare", 00:12:14.259 "uuid": "86443d35-49b8-5e22-8284-6c14711c97f8", 00:12:14.259 "is_configured": true, 00:12:14.259 "data_offset": 2048, 00:12:14.259 "data_size": 63488 00:12:14.259 }, 00:12:14.259 { 00:12:14.259 "name": "BaseBdev2", 00:12:14.259 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:14.259 "is_configured": true, 00:12:14.259 "data_offset": 2048, 00:12:14.259 "data_size": 63488 00:12:14.259 } 00:12:14.259 ] 00:12:14.259 }' 00:12:14.259 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.260 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.260 [2024-12-08 20:07:46.226449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.518 [2024-12-08 20:07:46.267959] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.518 [2024-12-08 20:07:46.268066] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.518 [2024-12-08 20:07:46.268086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.518 [2024-12-08 20:07:46.268094] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.518 20:07:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.518 "name": "raid_bdev1", 00:12:14.518 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:14.518 "strip_size_kb": 0, 00:12:14.518 "state": "online", 00:12:14.518 "raid_level": "raid1", 00:12:14.518 "superblock": true, 00:12:14.518 "num_base_bdevs": 2, 00:12:14.518 "num_base_bdevs_discovered": 1, 00:12:14.518 "num_base_bdevs_operational": 1, 00:12:14.518 "base_bdevs_list": [ 00:12:14.518 { 00:12:14.518 "name": null, 00:12:14.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.518 "is_configured": false, 00:12:14.518 "data_offset": 0, 00:12:14.518 "data_size": 63488 00:12:14.518 }, 00:12:14.518 { 00:12:14.518 "name": "BaseBdev2", 00:12:14.518 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:14.518 "is_configured": true, 00:12:14.518 "data_offset": 2048, 00:12:14.518 "data_size": 63488 00:12:14.518 } 00:12:14.518 ] 00:12:14.518 }' 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.518 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.124 "name": "raid_bdev1", 00:12:15.124 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:15.124 "strip_size_kb": 0, 00:12:15.124 "state": "online", 00:12:15.124 "raid_level": "raid1", 00:12:15.124 "superblock": true, 00:12:15.124 "num_base_bdevs": 2, 00:12:15.124 "num_base_bdevs_discovered": 1, 00:12:15.124 "num_base_bdevs_operational": 1, 00:12:15.124 "base_bdevs_list": [ 00:12:15.124 { 00:12:15.124 "name": null, 00:12:15.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.124 "is_configured": false, 00:12:15.124 "data_offset": 0, 00:12:15.124 "data_size": 63488 00:12:15.124 }, 00:12:15.124 { 00:12:15.124 "name": "BaseBdev2", 00:12:15.124 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:15.124 "is_configured": true, 00:12:15.124 "data_offset": 2048, 00:12:15.124 "data_size": 63488 00:12:15.124 } 00:12:15.124 ] 00:12:15.124 }' 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.124 [2024-12-08 20:07:46.905670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:15.124 [2024-12-08 20:07:46.905764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.124 [2024-12-08 20:07:46.905796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:15.124 [2024-12-08 20:07:46.905817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.124 [2024-12-08 20:07:46.906282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.124 [2024-12-08 20:07:46.906307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:15.124 [2024-12-08 20:07:46.906391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:15.124 [2024-12-08 20:07:46.906405] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.124 [2024-12-08 20:07:46.906416] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:15.124 [2024-12-08 20:07:46.906426] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:15.124 BaseBdev1 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.124 20:07:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.073 "name": "raid_bdev1", 00:12:16.073 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:16.073 
"strip_size_kb": 0, 00:12:16.073 "state": "online", 00:12:16.073 "raid_level": "raid1", 00:12:16.073 "superblock": true, 00:12:16.073 "num_base_bdevs": 2, 00:12:16.073 "num_base_bdevs_discovered": 1, 00:12:16.073 "num_base_bdevs_operational": 1, 00:12:16.073 "base_bdevs_list": [ 00:12:16.073 { 00:12:16.073 "name": null, 00:12:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.073 "is_configured": false, 00:12:16.073 "data_offset": 0, 00:12:16.073 "data_size": 63488 00:12:16.073 }, 00:12:16.073 { 00:12:16.073 "name": "BaseBdev2", 00:12:16.073 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:16.073 "is_configured": true, 00:12:16.073 "data_offset": 2048, 00:12:16.073 "data_size": 63488 00:12:16.073 } 00:12:16.073 ] 00:12:16.073 }' 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.073 20:07:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 20:07:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.643 "name": "raid_bdev1", 00:12:16.643 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:16.643 "strip_size_kb": 0, 00:12:16.643 "state": "online", 00:12:16.643 "raid_level": "raid1", 00:12:16.643 "superblock": true, 00:12:16.643 "num_base_bdevs": 2, 00:12:16.643 "num_base_bdevs_discovered": 1, 00:12:16.643 "num_base_bdevs_operational": 1, 00:12:16.643 "base_bdevs_list": [ 00:12:16.643 { 00:12:16.643 "name": null, 00:12:16.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.643 "is_configured": false, 00:12:16.643 "data_offset": 0, 00:12:16.643 "data_size": 63488 00:12:16.643 }, 00:12:16.643 { 00:12:16.643 "name": "BaseBdev2", 00:12:16.643 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:16.643 "is_configured": true, 00:12:16.643 "data_offset": 2048, 00:12:16.643 "data_size": 63488 00:12:16.643 } 00:12:16.643 ] 00:12:16.643 }' 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 [2024-12-08 20:07:48.518942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.643 [2024-12-08 20:07:48.519124] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:16.643 [2024-12-08 20:07:48.519142] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:16.643 request: 00:12:16.643 { 00:12:16.643 "base_bdev": "BaseBdev1", 00:12:16.643 "raid_bdev": "raid_bdev1", 00:12:16.643 "method": "bdev_raid_add_base_bdev", 00:12:16.643 "req_id": 1 00:12:16.643 } 00:12:16.643 Got JSON-RPC error response 00:12:16.643 response: 00:12:16.643 { 00:12:16.643 "code": -22, 00:12:16.643 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:16.643 } 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.643 20:07:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.643 20:07:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.583 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.842 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.842 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.842 "name": "raid_bdev1", 00:12:17.842 "uuid": 
"1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:17.842 "strip_size_kb": 0, 00:12:17.842 "state": "online", 00:12:17.842 "raid_level": "raid1", 00:12:17.842 "superblock": true, 00:12:17.842 "num_base_bdevs": 2, 00:12:17.842 "num_base_bdevs_discovered": 1, 00:12:17.842 "num_base_bdevs_operational": 1, 00:12:17.842 "base_bdevs_list": [ 00:12:17.842 { 00:12:17.842 "name": null, 00:12:17.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.842 "is_configured": false, 00:12:17.842 "data_offset": 0, 00:12:17.842 "data_size": 63488 00:12:17.842 }, 00:12:17.842 { 00:12:17.842 "name": "BaseBdev2", 00:12:17.842 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:17.842 "is_configured": true, 00:12:17.842 "data_offset": 2048, 00:12:17.842 "data_size": 63488 00:12:17.842 } 00:12:17.842 ] 00:12:17.842 }' 00:12:17.842 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.842 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:18.102 20:07:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.102 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.102 "name": "raid_bdev1", 00:12:18.102 "uuid": "1ed92630-afef-4c90-a95f-05e09f12a3e8", 00:12:18.102 "strip_size_kb": 0, 00:12:18.102 "state": "online", 00:12:18.102 "raid_level": "raid1", 00:12:18.102 "superblock": true, 00:12:18.102 "num_base_bdevs": 2, 00:12:18.102 "num_base_bdevs_discovered": 1, 00:12:18.102 "num_base_bdevs_operational": 1, 00:12:18.102 "base_bdevs_list": [ 00:12:18.102 { 00:12:18.102 "name": null, 00:12:18.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.102 "is_configured": false, 00:12:18.102 "data_offset": 0, 00:12:18.102 "data_size": 63488 00:12:18.102 }, 00:12:18.102 { 00:12:18.102 "name": "BaseBdev2", 00:12:18.102 "uuid": "5b7ddd3c-4750-57a8-aeac-a6416b54cd5e", 00:12:18.102 "is_configured": true, 00:12:18.102 "data_offset": 2048, 00:12:18.102 "data_size": 63488 00:12:18.102 } 00:12:18.102 ] 00:12:18.102 }' 00:12:18.102 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.102 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.102 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75486 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75486 ']' 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75486 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75486 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.361 killing process with pid 75486 00:12:18.361 Received shutdown signal, test time was about 60.000000 seconds 00:12:18.361 00:12:18.361 Latency(us) 00:12:18.361 [2024-12-08T20:07:50.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.361 [2024-12-08T20:07:50.339Z] =================================================================================================================== 00:12:18.361 [2024-12-08T20:07:50.339Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75486' 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75486 00:12:18.361 [2024-12-08 20:07:50.143527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.361 [2024-12-08 20:07:50.143652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.361 20:07:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75486 00:12:18.361 [2024-12-08 20:07:50.143705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.361 [2024-12-08 20:07:50.143716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:18.621 [2024-12-08 20:07:50.441447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.563 20:07:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:19.563 00:12:19.563 real 0m22.958s 00:12:19.563 user 0m28.292s 00:12:19.563 sys 0m3.558s 00:12:19.563 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.563 20:07:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.563 ************************************ 00:12:19.563 END TEST raid_rebuild_test_sb 00:12:19.563 ************************************ 00:12:19.854 20:07:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:19.854 20:07:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:19.854 20:07:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.854 20:07:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.854 ************************************ 00:12:19.854 START TEST raid_rebuild_test_io 00:12:19.854 ************************************ 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.854 20:07:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76216 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76216 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76216 ']' 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.855 20:07:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.855 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:19.855 Zero copy mechanism will not be used. 00:12:19.855 [2024-12-08 20:07:51.708909] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:19.855 [2024-12-08 20:07:51.709039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76216 ] 00:12:20.114 [2024-12-08 20:07:51.875534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.114 [2024-12-08 20:07:51.985305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.373 [2024-12-08 20:07:52.186045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.373 [2024-12-08 20:07:52.186078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.633 BaseBdev1_malloc 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.633 [2024-12-08 20:07:52.573192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:20.633 [2024-12-08 20:07:52.573260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.633 [2024-12-08 20:07:52.573281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.633 [2024-12-08 20:07:52.573291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.633 [2024-12-08 20:07:52.575295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.633 [2024-12-08 20:07:52.575329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.633 BaseBdev1 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.633 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.892 BaseBdev2_malloc 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.892 [2024-12-08 20:07:52.628429] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:20.892 [2024-12-08 20:07:52.628481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.892 [2024-12-08 20:07:52.628503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.892 [2024-12-08 20:07:52.628513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.892 [2024-12-08 20:07:52.630503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.892 [2024-12-08 20:07:52.630538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.892 BaseBdev2 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.892 spare_malloc 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.892 spare_delay 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.892 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.892 [2024-12-08 20:07:52.710693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:20.892 [2024-12-08 20:07:52.710745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.892 [2024-12-08 20:07:52.710763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:20.893 [2024-12-08 20:07:52.710773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.893 [2024-12-08 20:07:52.712854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.893 [2024-12-08 20:07:52.712890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:20.893 spare 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.893 
20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.893 [2024-12-08 20:07:52.722723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.893 [2024-12-08 20:07:52.724503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.893 [2024-12-08 20:07:52.724593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:20.893 [2024-12-08 20:07:52.724606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:20.893 [2024-12-08 20:07:52.724866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:20.893 [2024-12-08 20:07:52.725063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:20.893 [2024-12-08 20:07:52.725083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:20.893 [2024-12-08 20:07:52.725242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.893 "name": "raid_bdev1", 00:12:20.893 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:20.893 "strip_size_kb": 0, 00:12:20.893 "state": "online", 00:12:20.893 "raid_level": "raid1", 00:12:20.893 "superblock": false, 00:12:20.893 "num_base_bdevs": 2, 00:12:20.893 "num_base_bdevs_discovered": 2, 00:12:20.893 "num_base_bdevs_operational": 2, 00:12:20.893 "base_bdevs_list": [ 00:12:20.893 { 00:12:20.893 "name": "BaseBdev1", 00:12:20.893 "uuid": "47a1f924-ad78-56ee-874c-13d8f8762b05", 00:12:20.893 "is_configured": true, 00:12:20.893 "data_offset": 0, 00:12:20.893 "data_size": 65536 00:12:20.893 }, 00:12:20.893 { 00:12:20.893 "name": "BaseBdev2", 00:12:20.893 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:20.893 "is_configured": true, 00:12:20.893 "data_offset": 0, 00:12:20.893 "data_size": 65536 00:12:20.893 } 00:12:20.893 ] 00:12:20.893 }' 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.893 20:07:52 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 [2024-12-08 20:07:53.138281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:21.461 [2024-12-08 20:07:53.225827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:21.461 "name": "raid_bdev1", 00:12:21.461 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:21.461 "strip_size_kb": 0, 00:12:21.461 "state": "online", 00:12:21.461 "raid_level": "raid1", 00:12:21.461 "superblock": false, 00:12:21.461 "num_base_bdevs": 2, 00:12:21.461 "num_base_bdevs_discovered": 1, 00:12:21.461 "num_base_bdevs_operational": 1, 00:12:21.461 "base_bdevs_list": [ 00:12:21.461 { 00:12:21.461 "name": null, 00:12:21.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.461 "is_configured": false, 00:12:21.461 "data_offset": 0, 00:12:21.461 "data_size": 65536 00:12:21.461 }, 00:12:21.461 { 00:12:21.461 "name": "BaseBdev2", 00:12:21.461 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:21.461 "is_configured": true, 00:12:21.461 "data_offset": 0, 00:12:21.461 "data_size": 65536 00:12:21.461 } 00:12:21.461 ] 00:12:21.461 }' 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.461 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 [2024-12-08 20:07:53.322037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:21.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.461 Zero copy mechanism will not be used. 00:12:21.461 Running I/O for 60 seconds... 
00:12:21.720 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.720 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.720 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.720 [2024-12-08 20:07:53.653170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.978 20:07:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.978 20:07:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:21.978 [2024-12-08 20:07:53.725523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:21.978 [2024-12-08 20:07:53.727392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.978 [2024-12-08 20:07:53.873066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.257 [2024-12-08 20:07:53.974835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.257 [2024-12-08 20:07:53.975135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.257 [2024-12-08 20:07:54.219326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.257 [2024-12-08 20:07:54.219741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.517 175.00 IOPS, 525.00 MiB/s [2024-12-08T20:07:54.495Z] [2024-12-08 20:07:54.430493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.517 [2024-12-08 20:07:54.430867] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.778 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.778 "name": "raid_bdev1", 00:12:22.778 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:22.778 "strip_size_kb": 0, 00:12:22.778 "state": "online", 00:12:22.778 "raid_level": "raid1", 00:12:22.778 "superblock": false, 00:12:22.778 "num_base_bdevs": 2, 00:12:22.778 "num_base_bdevs_discovered": 2, 00:12:22.778 "num_base_bdevs_operational": 2, 00:12:22.778 "process": { 00:12:22.778 "type": "rebuild", 00:12:22.778 "target": "spare", 00:12:22.778 "progress": { 00:12:22.778 "blocks": 12288, 00:12:22.778 "percent": 18 00:12:22.778 } 00:12:22.778 }, 00:12:22.778 "base_bdevs_list": [ 00:12:22.778 { 00:12:22.778 "name": "spare", 00:12:22.778 "uuid": 
"4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:22.778 "is_configured": true, 00:12:22.778 "data_offset": 0, 00:12:22.778 "data_size": 65536 00:12:22.778 }, 00:12:22.778 { 00:12:22.778 "name": "BaseBdev2", 00:12:22.778 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:22.778 "is_configured": true, 00:12:22.778 "data_offset": 0, 00:12:22.778 "data_size": 65536 00:12:22.778 } 00:12:22.778 ] 00:12:22.778 }' 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.039 [2024-12-08 20:07:54.786569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.039 20:07:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.039 [2024-12-08 20:07:54.858764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.039 [2024-12-08 20:07:54.900883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:23.039 [2024-12-08 20:07:54.901221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:23.039 [2024-12-08 20:07:55.008161] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.300 [2024-12-08 20:07:55.016490] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.300 [2024-12-08 20:07:55.016530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.300 [2024-12-08 20:07:55.016545] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.300 [2024-12-08 20:07:55.058428] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.300 
20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.300 "name": "raid_bdev1", 00:12:23.300 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:23.300 "strip_size_kb": 0, 00:12:23.300 "state": "online", 00:12:23.300 "raid_level": "raid1", 00:12:23.300 "superblock": false, 00:12:23.300 "num_base_bdevs": 2, 00:12:23.300 "num_base_bdevs_discovered": 1, 00:12:23.300 "num_base_bdevs_operational": 1, 00:12:23.300 "base_bdevs_list": [ 00:12:23.300 { 00:12:23.300 "name": null, 00:12:23.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.300 "is_configured": false, 00:12:23.300 "data_offset": 0, 00:12:23.300 "data_size": 65536 00:12:23.300 }, 00:12:23.300 { 00:12:23.300 "name": "BaseBdev2", 00:12:23.300 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:23.300 "is_configured": true, 00:12:23.300 "data_offset": 0, 00:12:23.300 "data_size": 65536 00:12:23.300 } 00:12:23.300 ] 00:12:23.300 }' 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.300 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.560 151.00 IOPS, 453.00 MiB/s [2024-12-08T20:07:55.538Z] 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.560 
20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.560 "name": "raid_bdev1", 00:12:23.560 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:23.560 "strip_size_kb": 0, 00:12:23.560 "state": "online", 00:12:23.560 "raid_level": "raid1", 00:12:23.560 "superblock": false, 00:12:23.560 "num_base_bdevs": 2, 00:12:23.560 "num_base_bdevs_discovered": 1, 00:12:23.560 "num_base_bdevs_operational": 1, 00:12:23.560 "base_bdevs_list": [ 00:12:23.560 { 00:12:23.560 "name": null, 00:12:23.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.560 "is_configured": false, 00:12:23.560 "data_offset": 0, 00:12:23.560 "data_size": 65536 00:12:23.560 }, 00:12:23.560 { 00:12:23.560 "name": "BaseBdev2", 00:12:23.560 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:23.560 "is_configured": true, 00:12:23.560 "data_offset": 0, 00:12:23.560 "data_size": 65536 00:12:23.560 } 00:12:23.560 ] 00:12:23.560 }' 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.560 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.819 [2024-12-08 20:07:55.567868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.819 20:07:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:23.819 [2024-12-08 20:07:55.633755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:23.819 [2024-12-08 20:07:55.635670] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.819 [2024-12-08 20:07:55.747818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.819 [2024-12-08 20:07:55.748364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.078 [2024-12-08 20:07:55.969549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.078 [2024-12-08 20:07:55.969885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.338 [2024-12-08 20:07:56.193247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.338 [2024-12-08 20:07:56.307147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.857 144.67 IOPS, 434.00 MiB/s [2024-12-08T20:07:56.835Z] 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.857 [2024-12-08 20:07:56.629240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.857 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.857 "name": "raid_bdev1", 00:12:24.857 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:24.857 "strip_size_kb": 0, 00:12:24.857 "state": "online", 00:12:24.857 "raid_level": "raid1", 00:12:24.857 "superblock": false, 00:12:24.857 "num_base_bdevs": 2, 00:12:24.857 "num_base_bdevs_discovered": 2, 00:12:24.857 "num_base_bdevs_operational": 2, 00:12:24.857 "process": { 00:12:24.857 "type": "rebuild", 00:12:24.857 "target": "spare", 00:12:24.857 "progress": { 00:12:24.858 "blocks": 14336, 00:12:24.858 "percent": 21 00:12:24.858 } 00:12:24.858 }, 00:12:24.858 "base_bdevs_list": [ 00:12:24.858 { 00:12:24.858 "name": "spare", 00:12:24.858 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:24.858 "is_configured": true, 
00:12:24.858 "data_offset": 0, 00:12:24.858 "data_size": 65536 00:12:24.858 }, 00:12:24.858 { 00:12:24.858 "name": "BaseBdev2", 00:12:24.858 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:24.858 "is_configured": true, 00:12:24.858 "data_offset": 0, 00:12:24.858 "data_size": 65536 00:12:24.858 } 00:12:24.858 ] 00:12:24.858 }' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.858 "name": "raid_bdev1", 00:12:24.858 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:24.858 "strip_size_kb": 0, 00:12:24.858 "state": "online", 00:12:24.858 "raid_level": "raid1", 00:12:24.858 "superblock": false, 00:12:24.858 "num_base_bdevs": 2, 00:12:24.858 "num_base_bdevs_discovered": 2, 00:12:24.858 "num_base_bdevs_operational": 2, 00:12:24.858 "process": { 00:12:24.858 "type": "rebuild", 00:12:24.858 "target": "spare", 00:12:24.858 "progress": { 00:12:24.858 "blocks": 14336, 00:12:24.858 "percent": 21 00:12:24.858 } 00:12:24.858 }, 00:12:24.858 "base_bdevs_list": [ 00:12:24.858 { 00:12:24.858 "name": "spare", 00:12:24.858 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:24.858 "is_configured": true, 00:12:24.858 "data_offset": 0, 00:12:24.858 "data_size": 65536 00:12:24.858 }, 00:12:24.858 { 00:12:24.858 "name": "BaseBdev2", 00:12:24.858 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:24.858 "is_configured": true, 00:12:24.858 "data_offset": 0, 00:12:24.858 "data_size": 65536 00:12:24.858 } 00:12:24.858 ] 00:12:24.858 }' 00:12:24.858 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.123 [2024-12-08 20:07:56.844887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:12:25.123 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.123 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.123 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.123 20:07:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.388 [2024-12-08 20:07:57.182576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:25.958 119.25 IOPS, 357.75 MiB/s [2024-12-08T20:07:57.936Z] [2024-12-08 20:07:57.829186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.218 "name": "raid_bdev1", 00:12:26.218 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:26.218 "strip_size_kb": 0, 00:12:26.218 "state": "online", 00:12:26.218 "raid_level": "raid1", 00:12:26.218 "superblock": false, 00:12:26.218 "num_base_bdevs": 2, 00:12:26.218 "num_base_bdevs_discovered": 2, 00:12:26.218 "num_base_bdevs_operational": 2, 00:12:26.218 "process": { 00:12:26.218 "type": "rebuild", 00:12:26.218 "target": "spare", 00:12:26.218 "progress": { 00:12:26.218 "blocks": 32768, 00:12:26.218 "percent": 50 00:12:26.218 } 00:12:26.218 }, 00:12:26.218 "base_bdevs_list": [ 00:12:26.218 { 00:12:26.218 "name": "spare", 00:12:26.218 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:26.218 "is_configured": true, 00:12:26.218 "data_offset": 0, 00:12:26.218 "data_size": 65536 00:12:26.218 }, 00:12:26.218 { 00:12:26.218 "name": "BaseBdev2", 00:12:26.218 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:26.218 "is_configured": true, 00:12:26.218 "data_offset": 0, 00:12:26.218 "data_size": 65536 00:12:26.218 } 00:12:26.218 ] 00:12:26.218 }' 00:12:26.218 20:07:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.218 20:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.218 20:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.219 20:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.219 20:07:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.219 [2024-12-08 20:07:58.171759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:26.479 112.20 IOPS, 336.60 MiB/s [2024-12-08T20:07:58.457Z] 
[2024-12-08 20:07:58.375589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:26.739 [2024-12-08 20:07:58.699196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.309 "name": "raid_bdev1", 00:12:27.309 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:27.309 "strip_size_kb": 0, 00:12:27.309 "state": "online", 00:12:27.309 "raid_level": "raid1", 00:12:27.309 "superblock": false, 00:12:27.309 "num_base_bdevs": 2, 00:12:27.309 "num_base_bdevs_discovered": 2, 00:12:27.309 "num_base_bdevs_operational": 2, 
00:12:27.309 "process": { 00:12:27.309 "type": "rebuild", 00:12:27.309 "target": "spare", 00:12:27.309 "progress": { 00:12:27.309 "blocks": 49152, 00:12:27.309 "percent": 75 00:12:27.309 } 00:12:27.309 }, 00:12:27.309 "base_bdevs_list": [ 00:12:27.309 { 00:12:27.309 "name": "spare", 00:12:27.309 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:27.309 "is_configured": true, 00:12:27.309 "data_offset": 0, 00:12:27.309 "data_size": 65536 00:12:27.309 }, 00:12:27.309 { 00:12:27.309 "name": "BaseBdev2", 00:12:27.309 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:27.309 "is_configured": true, 00:12:27.309 "data_offset": 0, 00:12:27.309 "data_size": 65536 00:12:27.309 } 00:12:27.309 ] 00:12:27.309 }' 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.309 20:07:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.140 98.00 IOPS, 294.00 MiB/s [2024-12-08T20:08:00.118Z] [2024-12-08 20:07:59.902724] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:28.140 [2024-12-08 20:08:00.002492] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.140 [2024-12-08 20:08:00.004518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.399 "name": "raid_bdev1", 00:12:28.399 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:28.399 "strip_size_kb": 0, 00:12:28.399 "state": "online", 00:12:28.399 "raid_level": "raid1", 00:12:28.399 "superblock": false, 00:12:28.399 "num_base_bdevs": 2, 00:12:28.399 "num_base_bdevs_discovered": 2, 00:12:28.399 "num_base_bdevs_operational": 2, 00:12:28.399 "base_bdevs_list": [ 00:12:28.399 { 00:12:28.399 "name": "spare", 00:12:28.399 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:28.399 "is_configured": true, 00:12:28.399 "data_offset": 0, 00:12:28.399 "data_size": 65536 00:12:28.399 }, 00:12:28.399 { 00:12:28.399 "name": "BaseBdev2", 00:12:28.399 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:28.399 "is_configured": true, 00:12:28.399 "data_offset": 0, 00:12:28.399 "data_size": 65536 00:12:28.399 } 00:12:28.399 ] 00:12:28.399 }' 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.399 88.14 IOPS, 264.43 MiB/s [2024-12-08T20:08:00.377Z] 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.399 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.659 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.660 "name": "raid_bdev1", 00:12:28.660 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:28.660 "strip_size_kb": 0, 00:12:28.660 "state": "online", 00:12:28.660 "raid_level": "raid1", 00:12:28.660 "superblock": false, 00:12:28.660 "num_base_bdevs": 2, 00:12:28.660 "num_base_bdevs_discovered": 
2, 00:12:28.660 "num_base_bdevs_operational": 2, 00:12:28.660 "base_bdevs_list": [ 00:12:28.660 { 00:12:28.660 "name": "spare", 00:12:28.660 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:28.660 "is_configured": true, 00:12:28.660 "data_offset": 0, 00:12:28.660 "data_size": 65536 00:12:28.660 }, 00:12:28.660 { 00:12:28.660 "name": "BaseBdev2", 00:12:28.660 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:28.660 "is_configured": true, 00:12:28.660 "data_offset": 0, 00:12:28.660 "data_size": 65536 00:12:28.660 } 00:12:28.660 ] 00:12:28.660 }' 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.660 "name": "raid_bdev1", 00:12:28.660 "uuid": "09cf9e69-a10b-4a5a-8aad-1403472a539b", 00:12:28.660 "strip_size_kb": 0, 00:12:28.660 "state": "online", 00:12:28.660 "raid_level": "raid1", 00:12:28.660 "superblock": false, 00:12:28.660 "num_base_bdevs": 2, 00:12:28.660 "num_base_bdevs_discovered": 2, 00:12:28.660 "num_base_bdevs_operational": 2, 00:12:28.660 "base_bdevs_list": [ 00:12:28.660 { 00:12:28.660 "name": "spare", 00:12:28.660 "uuid": "4213a1f0-56e6-52f3-a679-ce6ea9bc8acb", 00:12:28.660 "is_configured": true, 00:12:28.660 "data_offset": 0, 00:12:28.660 "data_size": 65536 00:12:28.660 }, 00:12:28.660 { 00:12:28.660 "name": "BaseBdev2", 00:12:28.660 "uuid": "696905af-f742-5819-a47a-2581d15586a2", 00:12:28.660 "is_configured": true, 00:12:28.660 "data_offset": 0, 00:12:28.660 "data_size": 65536 00:12:28.660 } 00:12:28.660 ] 00:12:28.660 }' 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.660 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.229 20:08:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.229 20:08:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.229 20:08:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.229 [2024-12-08 20:08:00.923159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.229 [2024-12-08 20:08:00.923195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.229 00:12:29.229 Latency(us) 00:12:29.229 [2024-12-08T20:08:01.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.229 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:29.229 raid_bdev1 : 7.69 83.27 249.81 0.00 0.00 16303.06 282.61 115389.15 00:12:29.229 [2024-12-08T20:08:01.207Z] =================================================================================================================== 00:12:29.229 [2024-12-08T20:08:01.207Z] Total : 83.27 249.81 0.00 0.00 16303.06 282.61 115389.15 00:12:29.229 [2024-12-08 20:08:01.016481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.229 [2024-12-08 20:08:01.016568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.229 [2024-12-08 20:08:01.016641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.229 [2024-12-08 20:08:01.016653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:29.229 { 00:12:29.229 "results": [ 00:12:29.229 { 00:12:29.229 "job": "raid_bdev1", 00:12:29.229 "core_mask": "0x1", 00:12:29.229 "workload": "randrw", 00:12:29.229 "percentage": 50, 00:12:29.229 "status": "finished", 00:12:29.229 "queue_depth": 2, 00:12:29.229 "io_size": 3145728, 00:12:29.229 "runtime": 7.685746, 00:12:29.229 "iops": 83.27103185559346, 00:12:29.229 "mibps": 249.8130955667804, 00:12:29.229 "io_failed": 0, 00:12:29.229 
"io_timeout": 0, 00:12:29.229 "avg_latency_us": 16303.062358078603, 00:12:29.229 "min_latency_us": 282.6061135371179, 00:12:29.229 "max_latency_us": 115389.14934497817 00:12:29.229 } 00:12:29.229 ], 00:12:29.229 "core_count": 1 00:12:29.229 } 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.229 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:29.489 /dev/nbd0 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.489 1+0 records in 00:12:29.489 1+0 records out 00:12:29.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358977 s, 11.4 MB/s 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.489 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:29.750 /dev/nbd1 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.750 20:08:01 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.750 1+0 records in 00:12:29.750 1+0 records out 00:12:29.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401234 s, 10.2 MB/s 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.750 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.010 20:08:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76216 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76216 ']' 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76216 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76216 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.270 killing process with pid 76216 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76216' 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76216 00:12:30.270 Received shutdown signal, test time was about 8.915866 seconds 00:12:30.270 00:12:30.270 Latency(us) 00:12:30.270 [2024-12-08T20:08:02.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.270 [2024-12-08T20:08:02.248Z] =================================================================================================================== 00:12:30.270 [2024-12-08T20:08:02.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.270 [2024-12-08 20:08:02.222709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.270 20:08:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76216 00:12:30.531 [2024-12-08 20:08:02.436755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.908 20:08:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:31.908 00:12:31.908 real 0m11.955s 00:12:31.908 user 0m14.979s 00:12:31.908 sys 0m1.424s 00:12:31.908 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.908 20:08:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.908 ************************************ 00:12:31.908 END TEST raid_rebuild_test_io 00:12:31.908 ************************************ 
00:12:31.908 20:08:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:31.908 20:08:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:31.909 20:08:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.909 20:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.909 ************************************ 00:12:31.909 START TEST raid_rebuild_test_sb_io 00:12:31.909 ************************************ 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76586 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76586 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76586 ']' 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.909 20:08:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.909 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.909 Zero copy mechanism will not be used. 00:12:31.909 [2024-12-08 20:08:03.732813] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:31.909 [2024-12-08 20:08:03.732934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76586 ] 00:12:32.168 [2024-12-08 20:08:03.904005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.168 [2024-12-08 20:08:04.012436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.427 [2024-12-08 20:08:04.207026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.427 [2024-12-08 20:08:04.207085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.687 BaseBdev1_malloc
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.687 [2024-12-08 20:08:04.595080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:32.687 [2024-12-08 20:08:04.595133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:32.687 [2024-12-08 20:08:04.595154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:32.687 [2024-12-08 20:08:04.595165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:32.687 [2024-12-08 20:08:04.597161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:32.687 [2024-12-08 20:08:04.597196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:32.687 BaseBdev1
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.687 BaseBdev2_malloc
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.687 [2024-12-08 20:08:04.640120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:32.687 [2024-12-08 20:08:04.640172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:32.687 [2024-12-08 20:08:04.640192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:32.687 [2024-12-08 20:08:04.640203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:32.687 [2024-12-08 20:08:04.642150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:32.687 [2024-12-08 20:08:04.642182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:32.687 BaseBdev2
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.687 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.948 spare_malloc
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.948 spare_delay
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.948 [2024-12-08 20:08:04.703904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:32.948 [2024-12-08 20:08:04.703974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:32.948 [2024-12-08 20:08:04.703995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:12:32.948 [2024-12-08 20:08:04.704006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:32.948 [2024-12-08 20:08:04.706141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:32.948 [2024-12-08 20:08:04.706172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:32.948 spare
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.948 [2024-12-08 20:08:04.711954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:32.948 [2024-12-08 20:08:04.713801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:32.948 [2024-12-08 20:08:04.714005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:32.948 [2024-12-08 20:08:04.714030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:32.948 [2024-12-08 20:08:04.714283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:32.948 [2024-12-08 20:08:04.714469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:32.948 [2024-12-08 20:08:04.714487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:32.948 [2024-12-08 20:08:04.714643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:32.948 "name": "raid_bdev1",
00:12:32.948 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:32.948 "strip_size_kb": 0,
00:12:32.948 "state": "online",
00:12:32.948 "raid_level": "raid1",
00:12:32.948 "superblock": true,
00:12:32.948 "num_base_bdevs": 2,
00:12:32.948 "num_base_bdevs_discovered": 2,
00:12:32.948 "num_base_bdevs_operational": 2,
00:12:32.948 "base_bdevs_list": [
00:12:32.948 {
00:12:32.948 "name": "BaseBdev1",
00:12:32.948 "uuid": "29761f14-da64-59c8-aa2d-9f89bdf53726",
00:12:32.948 "is_configured": true,
00:12:32.948 "data_offset": 2048,
00:12:32.948 "data_size": 63488
00:12:32.948 },
00:12:32.948 {
00:12:32.948 "name": "BaseBdev2",
00:12:32.948 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:32.948 "is_configured": true,
00:12:32.948 "data_offset": 2048,
00:12:32.948 "data_size": 63488
00:12:32.948 }
00:12:32.948 ]
00:12:32.948 }'
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:32.948 20:08:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:33.208 [2024-12-08 20:08:05.075641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:33.208 [2024-12-08 20:08:05.159168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.208 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.468 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.468 "name": "raid_bdev1",
00:12:33.468 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:33.468 "strip_size_kb": 0,
00:12:33.468 "state": "online",
00:12:33.468 "raid_level": "raid1",
00:12:33.468 "superblock": true,
00:12:33.468 "num_base_bdevs": 2,
00:12:33.468 "num_base_bdevs_discovered": 1,
00:12:33.468 "num_base_bdevs_operational": 1,
00:12:33.468 "base_bdevs_list": [
00:12:33.468 {
00:12:33.468 "name": null,
00:12:33.468 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:33.468 "is_configured": false,
00:12:33.468 "data_offset": 0,
00:12:33.468 "data_size": 63488
00:12:33.468 },
00:12:33.468 {
00:12:33.468 "name": "BaseBdev2",
00:12:33.468 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:33.468 "is_configured": true,
00:12:33.468 "data_offset": 2048,
00:12:33.468 "data_size": 63488
00:12:33.468 }
00:12:33.468 ]
00:12:33.468 }'
00:12:33.468 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.468 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.468 [2024-12-08 20:08:05.258213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
I/O size of 3145728 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 60 seconds...
00:12:33.727 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:33.727 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:33.727 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:33.727 [2024-12-08 20:08:05.551736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:33.727 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:33.727 20:08:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
[2024-12-08 20:08:05.590215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
[2024-12-08 20:08:05.592050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:33.986 [2024-12-08 20:08:05.705342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:05.705980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:05.920471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:05.920775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:34.555 [2024-12-08 20:08:06.247440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
156.00 IOPS, 468.00 MiB/s [2024-12-08T20:08:06.533Z]
[2024-12-08 20:08:06.355082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:34.829 [2024-12-08 20:08:06.578211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:34.829 [2024-12-08 20:08:06.578776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.829 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:34.829 "name": "raid_bdev1",
00:12:34.829 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:34.829 "strip_size_kb": 0,
00:12:34.829 "state": "online",
00:12:34.829 "raid_level": "raid1",
00:12:34.829 "superblock": true,
00:12:34.829 "num_base_bdevs": 2,
00:12:34.829 "num_base_bdevs_discovered": 2,
00:12:34.829 "num_base_bdevs_operational": 2,
00:12:34.829 "process": {
00:12:34.829 "type": "rebuild",
00:12:34.829 "target": "spare",
00:12:34.829 "progress": {
00:12:34.830 "blocks": 14336,
00:12:34.830 "percent": 22
00:12:34.830 }
00:12:34.830 },
00:12:34.830 "base_bdevs_list": [
00:12:34.830 {
00:12:34.830 "name": "spare",
00:12:34.830 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7",
00:12:34.830 "is_configured": true,
00:12:34.830 "data_offset": 2048,
00:12:34.830 "data_size": 63488
00:12:34.830 },
00:12:34.830 {
00:12:34.830 "name": "BaseBdev2",
00:12:34.830 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:34.830 "is_configured": true,
00:12:34.830 "data_offset": 2048,
00:12:34.830 "data_size": 63488
00:12:34.830 }
00:12:34.830 ]
00:12:34.830 }'
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.830 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:34.830 [2024-12-08 20:08:06.716965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-08 20:08:06.794169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:35.089 [2024-12-08 20:08:06.901080] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:35.089 [2024-12-08 20:08:06.915403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-12-08 20:08:06.915466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-12-08 20:08:06.915480] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
[2024-12-08 20:08:06.957824] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:35.089 20:08:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:35.089 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.089 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:35.089 "name": "raid_bdev1",
00:12:35.089 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:35.089 "strip_size_kb": 0,
00:12:35.089 "state": "online",
00:12:35.089 "raid_level": "raid1",
00:12:35.089 "superblock": true,
00:12:35.089 "num_base_bdevs": 2,
00:12:35.089 "num_base_bdevs_discovered": 1,
00:12:35.089 "num_base_bdevs_operational": 1,
00:12:35.089 "base_bdevs_list": [
00:12:35.089 {
00:12:35.089 "name": null,
00:12:35.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:35.089 "is_configured": false,
00:12:35.089 "data_offset": 0,
00:12:35.089 "data_size": 63488
00:12:35.089 },
00:12:35.089 {
00:12:35.089 "name": "BaseBdev2",
00:12:35.089 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:35.089 "is_configured": true,
00:12:35.089 "data_offset": 2048,
00:12:35.089 "data_size": 63488
00:12:35.089 }
00:12:35.089 ]
00:12:35.089 }'
00:12:35.089 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:35.089 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:35.611 154.50 IOPS, 463.50 MiB/s [2024-12-08T20:08:07.589Z]
20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.611 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:35.611 "name": "raid_bdev1",
00:12:35.612 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:35.612 "strip_size_kb": 0,
00:12:35.612 "state": "online",
00:12:35.612 "raid_level": "raid1",
00:12:35.612 "superblock": true,
00:12:35.612 "num_base_bdevs": 2,
00:12:35.612 "num_base_bdevs_discovered": 1,
00:12:35.612 "num_base_bdevs_operational": 1,
00:12:35.612 "base_bdevs_list": [
00:12:35.612 {
00:12:35.612 "name": null,
00:12:35.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:35.612 "is_configured": false,
00:12:35.612 "data_offset": 0,
00:12:35.612 "data_size": 63488
00:12:35.612 },
00:12:35.612 {
00:12:35.612 "name": "BaseBdev2",
00:12:35.612 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:35.612 "is_configured": true,
00:12:35.612 "data_offset": 2048,
00:12:35.612 "data_size": 63488
00:12:35.612 }
00:12:35.612 ]
00:12:35.612 }'
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.612 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:35.612 [2024-12-08 20:08:07.558262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:35.880 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.880 20:08:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
[2024-12-08 20:08:07.614439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
[2024-12-08 20:08:07.616391] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
[2024-12-08 20:08:07.724343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:07.725019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:07.845364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
[2024-12-08 20:08:07.845718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:36.464 156.67 IOPS, 470.00 MiB/s [2024-12-08T20:08:08.442Z]
[2024-12-08 20:08:08.309114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:36.724 "name": "raid_bdev1",
00:12:36.724 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:36.724 "strip_size_kb": 0,
00:12:36.724 "state": "online",
00:12:36.724 "raid_level": "raid1",
00:12:36.724 "superblock": true,
00:12:36.724 "num_base_bdevs": 2,
00:12:36.724 "num_base_bdevs_discovered": 2,
00:12:36.724 "num_base_bdevs_operational": 2,
00:12:36.724 "process": {
00:12:36.724 "type": "rebuild",
00:12:36.724 "target": "spare",
00:12:36.724 "progress": {
00:12:36.724 "blocks": 14336,
00:12:36.724 "percent": 22
00:12:36.724 }
00:12:36.724 },
00:12:36.724 "base_bdevs_list": [
00:12:36.724 {
00:12:36.724 "name": "spare",
00:12:36.724 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7",
00:12:36.724 "is_configured": true,
00:12:36.724 "data_offset": 2048,
00:12:36.724 "data_size": 63488
00:12:36.724 },
00:12:36.724 {
00:12:36.724 "name": "BaseBdev2",
00:12:36.724 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:36.724 "is_configured": true,
00:12:36.724 "data_offset": 2048,
00:12:36.724 "data_size": 63488
00:12:36.724 }
00:12:36.724 ]
00:12:36.724 }'
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
[2024-12-08 20:08:08.659985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:36.724 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:12:36.985 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=410
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:36.985 "name": "raid_bdev1",
00:12:36.985 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:36.985 "strip_size_kb": 0,
00:12:36.985 "state": "online",
00:12:36.985 "raid_level": "raid1",
00:12:36.985 "superblock": true,
00:12:36.985 "num_base_bdevs": 2,
00:12:36.985 "num_base_bdevs_discovered": 2,
00:12:36.985 "num_base_bdevs_operational": 2,
00:12:36.985 "process": {
00:12:36.985 "type": "rebuild",
00:12:36.985 "target": "spare",
00:12:36.985 "progress": {
00:12:36.985 "blocks": 16384,
00:12:36.985 "percent": 25
00:12:36.985 }
00:12:36.985 },
00:12:36.985 "base_bdevs_list": [
00:12:36.985 {
00:12:36.985 "name": "spare",
00:12:36.985 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7",
00:12:36.985 "is_configured": true,
00:12:36.985 "data_offset": 2048,
00:12:36.985 "data_size": 63488
00:12:36.985 },
00:12:36.985 {
00:12:36.985 "name": "BaseBdev2",
00:12:36.985 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:36.985 "is_configured": true,
00:12:36.985 "data_offset": 2048,
00:12:36.985 "data_size": 63488
00:12:36.985 }
00:12:36.985 ]
00:12:36.985 }'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:36.985 20:08:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
[2024-12-08 20:08:08.880429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:12:37.245 [2024-12-08 20:08:09.221122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
[2024-12-08 20:08:09.221772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:12:37.506 135.50 IOPS, 406.50 MiB/s [2024-12-08T20:08:09.484Z]
[2024-12-08 20:08:09.429995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
[2024-12-08 20:08:09.430366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:12:37.766 [2024-12-08 20:08:09.741050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:38.026 "name": "raid_bdev1",
00:12:38.026 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12",
00:12:38.026 "strip_size_kb": 0,
00:12:38.026 "state": "online",
00:12:38.026 "raid_level": "raid1",
00:12:38.026 "superblock": true,
00:12:38.026 "num_base_bdevs": 2,
00:12:38.026 "num_base_bdevs_discovered": 2,
00:12:38.026 "num_base_bdevs_operational": 2,
00:12:38.026 "process": {
00:12:38.026 "type": "rebuild",
00:12:38.026 "target": "spare",
00:12:38.026 "progress": {
00:12:38.026 "blocks": 34816,
00:12:38.026 "percent": 54
00:12:38.026 }
00:12:38.026 },
00:12:38.026 "base_bdevs_list": [
00:12:38.026 {
00:12:38.026 "name": "spare",
00:12:38.026 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7",
00:12:38.026 "is_configured": true,
00:12:38.026 "data_offset": 2048,
00:12:38.026 "data_size": 63488
00:12:38.026 },
00:12:38.026 {
00:12:38.026 "name": "BaseBdev2",
00:12:38.026 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975",
00:12:38.026 "is_configured": true,
00:12:38.026 "data_offset": 2048,
00:12:38.026 "data_size": 63488
00:12:38.026 }
00:12:38.026 ]
00:12:38.026 }'
00:12:38.026 20:08:09
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.026 20:08:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.286 20:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.286 20:08:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.286 [2024-12-08 20:08:10.060416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:38.286 [2024-12-08 20:08:10.188098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:38.806 120.00 IOPS, 360.00 MiB/s [2024-12-08T20:08:10.784Z] [2024-12-08 20:08:10.524387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.066 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.327 "name": "raid_bdev1", 00:12:39.327 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:39.327 "strip_size_kb": 0, 00:12:39.327 "state": "online", 00:12:39.327 "raid_level": "raid1", 00:12:39.327 "superblock": true, 00:12:39.327 "num_base_bdevs": 2, 00:12:39.327 "num_base_bdevs_discovered": 2, 00:12:39.327 "num_base_bdevs_operational": 2, 00:12:39.327 "process": { 00:12:39.327 "type": "rebuild", 00:12:39.327 "target": "spare", 00:12:39.327 "progress": { 00:12:39.327 "blocks": 53248, 00:12:39.327 "percent": 83 00:12:39.327 } 00:12:39.327 }, 00:12:39.327 "base_bdevs_list": [ 00:12:39.327 { 00:12:39.327 "name": "spare", 00:12:39.327 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:39.327 "is_configured": true, 00:12:39.327 "data_offset": 2048, 00:12:39.327 "data_size": 63488 00:12:39.327 }, 00:12:39.327 { 00:12:39.327 "name": "BaseBdev2", 00:12:39.327 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:39.327 "is_configured": true, 00:12:39.327 "data_offset": 2048, 00:12:39.327 "data_size": 63488 00:12:39.327 } 00:12:39.327 ] 00:12:39.327 }' 00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:12:39.327 20:08:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.327 107.00 IOPS, 321.00 MiB/s [2024-12-08T20:08:11.305Z] [2024-12-08 20:08:11.284330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:39.898 [2024-12-08 20:08:11.611745] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:39.898 [2024-12-08 20:08:11.711602] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:39.898 [2024-12-08 20:08:11.713449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.468 20:08:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.468 "name": "raid_bdev1", 00:12:40.468 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:40.468 "strip_size_kb": 0, 00:12:40.468 "state": "online", 00:12:40.468 "raid_level": "raid1", 00:12:40.468 "superblock": true, 00:12:40.468 "num_base_bdevs": 2, 00:12:40.468 "num_base_bdevs_discovered": 2, 00:12:40.468 "num_base_bdevs_operational": 2, 00:12:40.468 "base_bdevs_list": [ 00:12:40.468 { 00:12:40.468 "name": "spare", 00:12:40.468 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:40.468 "is_configured": true, 00:12:40.468 "data_offset": 2048, 00:12:40.468 "data_size": 63488 00:12:40.468 }, 00:12:40.468 { 00:12:40.468 "name": "BaseBdev2", 00:12:40.468 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:40.468 "is_configured": true, 00:12:40.468 "data_offset": 2048, 00:12:40.468 "data_size": 63488 00:12:40.468 } 00:12:40.468 ] 00:12:40.468 }' 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.468 98.43 IOPS, 295.29 MiB/s [2024-12-08T20:08:12.446Z] 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.468 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.469 "name": "raid_bdev1", 00:12:40.469 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:40.469 "strip_size_kb": 0, 00:12:40.469 "state": "online", 00:12:40.469 "raid_level": "raid1", 00:12:40.469 "superblock": true, 00:12:40.469 "num_base_bdevs": 2, 00:12:40.469 "num_base_bdevs_discovered": 2, 00:12:40.469 "num_base_bdevs_operational": 2, 00:12:40.469 "base_bdevs_list": [ 00:12:40.469 { 00:12:40.469 "name": "spare", 00:12:40.469 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:40.469 "is_configured": true, 00:12:40.469 "data_offset": 2048, 00:12:40.469 "data_size": 63488 00:12:40.469 }, 00:12:40.469 { 00:12:40.469 "name": "BaseBdev2", 00:12:40.469 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:40.469 "is_configured": true, 00:12:40.469 "data_offset": 2048, 00:12:40.469 "data_size": 63488 00:12:40.469 } 00:12:40.469 ] 00:12:40.469 }' 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.469 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.729 "name": "raid_bdev1", 00:12:40.729 
"uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:40.729 "strip_size_kb": 0, 00:12:40.729 "state": "online", 00:12:40.729 "raid_level": "raid1", 00:12:40.729 "superblock": true, 00:12:40.729 "num_base_bdevs": 2, 00:12:40.729 "num_base_bdevs_discovered": 2, 00:12:40.729 "num_base_bdevs_operational": 2, 00:12:40.729 "base_bdevs_list": [ 00:12:40.729 { 00:12:40.729 "name": "spare", 00:12:40.729 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:40.729 "is_configured": true, 00:12:40.729 "data_offset": 2048, 00:12:40.729 "data_size": 63488 00:12:40.729 }, 00:12:40.729 { 00:12:40.729 "name": "BaseBdev2", 00:12:40.729 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:40.729 "is_configured": true, 00:12:40.729 "data_offset": 2048, 00:12:40.729 "data_size": 63488 00:12:40.729 } 00:12:40.729 ] 00:12:40.729 }' 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.729 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.989 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.989 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.989 20:08:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.989 [2024-12-08 20:08:12.898403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.989 [2024-12-08 20:08:12.898445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.249 00:12:41.249 Latency(us) 00:12:41.249 [2024-12-08T20:08:13.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.249 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:41.249 raid_bdev1 : 7.75 92.09 276.28 0.00 0.00 15163.37 325.53 112641.79 00:12:41.249 [2024-12-08T20:08:13.227Z] 
=================================================================================================================== 00:12:41.249 [2024-12-08T20:08:13.227Z] Total : 92.09 276.28 0.00 0.00 15163.37 325.53 112641.79 00:12:41.249 [2024-12-08 20:08:13.019471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.249 [2024-12-08 20:08:13.019562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.249 [2024-12-08 20:08:13.019636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.249 [2024-12-08 20:08:13.019660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.249 { 00:12:41.249 "results": [ 00:12:41.249 { 00:12:41.249 "job": "raid_bdev1", 00:12:41.249 "core_mask": "0x1", 00:12:41.249 "workload": "randrw", 00:12:41.249 "percentage": 50, 00:12:41.249 "status": "finished", 00:12:41.249 "queue_depth": 2, 00:12:41.249 "io_size": 3145728, 00:12:41.249 "runtime": 7.752899, 00:12:41.249 "iops": 92.09458294245805, 00:12:41.249 "mibps": 276.28374882737415, 00:12:41.249 "io_failed": 0, 00:12:41.249 "io_timeout": 0, 00:12:41.249 "avg_latency_us": 15163.36790087214, 00:12:41.249 "min_latency_us": 325.5336244541485, 00:12:41.249 "max_latency_us": 112641.78864628822 00:12:41.249 } 00:12:41.249 ], 00:12:41.249 "core_count": 1 00:12:41.249 } 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.249 20:08:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.249 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:41.509 /dev/nbd0 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.509 1+0 records in 00:12:41.509 1+0 records out 00:12:41.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411824 s, 9.9 MB/s 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.509 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:41.769 /dev/nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:41.769 20:08:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.769 1+0 records in 00:12:41.769 1+0 records out 00:12:41.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373466 s, 11.0 MB/s 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.769 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.030 20:08:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.290 [2024-12-08 20:08:14.195904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.290 [2024-12-08 20:08:14.196262] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.290 [2024-12-08 20:08:14.196347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:42.290 [2024-12-08 20:08:14.196400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.290 [2024-12-08 20:08:14.198544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.290 [2024-12-08 20:08:14.198657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.290 [2024-12-08 20:08:14.198786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:42.290 [2024-12-08 20:08:14.198838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.290 [2024-12-08 20:08:14.199001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.290 spare 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.290 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.551 [2024-12-08 20:08:14.298928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:42.551 [2024-12-08 20:08:14.298961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.551 [2024-12-08 20:08:14.299257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:42.551 [2024-12-08 20:08:14.299461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:42.551 [2024-12-08 20:08:14.299481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:12:42.551 [2024-12-08 20:08:14.299701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.551 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.552 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.552 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.552 20:08:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.552 "name": "raid_bdev1", 00:12:42.552 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:42.552 "strip_size_kb": 0, 00:12:42.552 "state": "online", 00:12:42.552 "raid_level": "raid1", 00:12:42.552 "superblock": true, 00:12:42.552 "num_base_bdevs": 2, 00:12:42.552 "num_base_bdevs_discovered": 2, 00:12:42.552 "num_base_bdevs_operational": 2, 00:12:42.552 "base_bdevs_list": [ 00:12:42.552 { 00:12:42.552 "name": "spare", 00:12:42.552 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:42.552 "is_configured": true, 00:12:42.552 "data_offset": 2048, 00:12:42.552 "data_size": 63488 00:12:42.552 }, 00:12:42.552 { 00:12:42.552 "name": "BaseBdev2", 00:12:42.552 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:42.552 "is_configured": true, 00:12:42.552 "data_offset": 2048, 00:12:42.552 "data_size": 63488 00:12:42.552 } 00:12:42.552 ] 00:12:42.552 }' 00:12:42.552 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.552 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.811 20:08:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.811 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.072 "name": "raid_bdev1", 00:12:43.072 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:43.072 "strip_size_kb": 0, 00:12:43.072 "state": "online", 00:12:43.072 "raid_level": "raid1", 00:12:43.072 "superblock": true, 00:12:43.072 "num_base_bdevs": 2, 00:12:43.072 "num_base_bdevs_discovered": 2, 00:12:43.072 "num_base_bdevs_operational": 2, 00:12:43.072 "base_bdevs_list": [ 00:12:43.072 { 00:12:43.072 "name": "spare", 00:12:43.072 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:43.072 "is_configured": true, 00:12:43.072 "data_offset": 2048, 00:12:43.072 "data_size": 63488 00:12:43.072 }, 00:12:43.072 { 00:12:43.072 "name": "BaseBdev2", 00:12:43.072 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:43.072 "is_configured": true, 00:12:43.072 "data_offset": 2048, 00:12:43.072 "data_size": 63488 00:12:43.072 } 00:12:43.072 ] 00:12:43.072 }' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.072 20:08:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.072 [2024-12-08 20:08:14.942870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.072 "name": "raid_bdev1", 00:12:43.072 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:43.072 "strip_size_kb": 0, 00:12:43.072 "state": "online", 00:12:43.072 "raid_level": "raid1", 00:12:43.072 "superblock": true, 00:12:43.072 "num_base_bdevs": 2, 00:12:43.072 "num_base_bdevs_discovered": 1, 00:12:43.072 "num_base_bdevs_operational": 1, 00:12:43.072 "base_bdevs_list": [ 00:12:43.072 { 00:12:43.072 "name": null, 00:12:43.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.072 "is_configured": false, 00:12:43.072 "data_offset": 0, 00:12:43.072 "data_size": 63488 00:12:43.072 }, 00:12:43.072 { 00:12:43.072 "name": "BaseBdev2", 00:12:43.072 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:43.072 "is_configured": true, 00:12:43.072 "data_offset": 2048, 00:12:43.072 "data_size": 63488 00:12:43.072 } 00:12:43.072 ] 00:12:43.072 }' 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.072 20:08:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.640 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:12:43.641 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.641 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.641 [2024-12-08 20:08:15.442092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.641 [2024-12-08 20:08:15.442302] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:43.641 [2024-12-08 20:08:15.442321] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:43.641 [2024-12-08 20:08:15.442716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.641 [2024-12-08 20:08:15.459198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:43.641 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.641 20:08:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:43.641 [2024-12-08 20:08:15.461113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.579 "name": "raid_bdev1", 00:12:44.579 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:44.579 "strip_size_kb": 0, 00:12:44.579 "state": "online", 00:12:44.579 "raid_level": "raid1", 00:12:44.579 "superblock": true, 00:12:44.579 "num_base_bdevs": 2, 00:12:44.579 "num_base_bdevs_discovered": 2, 00:12:44.579 "num_base_bdevs_operational": 2, 00:12:44.579 "process": { 00:12:44.579 "type": "rebuild", 00:12:44.579 "target": "spare", 00:12:44.579 "progress": { 00:12:44.579 "blocks": 20480, 00:12:44.579 "percent": 32 00:12:44.579 } 00:12:44.579 }, 00:12:44.579 "base_bdevs_list": [ 00:12:44.579 { 00:12:44.579 "name": "spare", 00:12:44.579 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:44.579 "is_configured": true, 00:12:44.579 "data_offset": 2048, 00:12:44.579 "data_size": 63488 00:12:44.579 }, 00:12:44.579 { 00:12:44.579 "name": "BaseBdev2", 00:12:44.579 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:44.579 "is_configured": true, 00:12:44.579 "data_offset": 2048, 00:12:44.579 "data_size": 63488 00:12:44.579 } 00:12:44.579 ] 00:12:44.579 }' 00:12:44.579 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.839 [2024-12-08 20:08:16.596842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.839 [2024-12-08 20:08:16.667090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.839 [2024-12-08 20:08:16.667166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.839 [2024-12-08 20:08:16.667185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.839 [2024-12-08 20:08:16.667192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.839 "name": "raid_bdev1", 00:12:44.839 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:44.839 "strip_size_kb": 0, 00:12:44.839 "state": "online", 00:12:44.839 "raid_level": "raid1", 00:12:44.839 "superblock": true, 00:12:44.839 "num_base_bdevs": 2, 00:12:44.839 "num_base_bdevs_discovered": 1, 00:12:44.839 "num_base_bdevs_operational": 1, 00:12:44.839 "base_bdevs_list": [ 00:12:44.839 { 00:12:44.839 "name": null, 00:12:44.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.839 "is_configured": false, 00:12:44.839 "data_offset": 0, 00:12:44.839 "data_size": 63488 00:12:44.839 }, 00:12:44.839 { 00:12:44.839 "name": "BaseBdev2", 00:12:44.839 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:44.839 "is_configured": true, 00:12:44.839 "data_offset": 2048, 00:12:44.839 "data_size": 63488 00:12:44.839 } 00:12:44.839 ] 00:12:44.839 }' 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.839 20:08:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:45.408 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.408 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.408 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.408 [2024-12-08 20:08:17.109393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.408 [2024-12-08 20:08:17.109454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.408 [2024-12-08 20:08:17.109478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:45.408 [2024-12-08 20:08:17.109488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.408 [2024-12-08 20:08:17.109978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.408 [2024-12-08 20:08:17.109996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.408 [2024-12-08 20:08:17.110100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:45.408 [2024-12-08 20:08:17.110115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:45.408 [2024-12-08 20:08:17.110126] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:45.408 [2024-12-08 20:08:17.110145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.408 spare 00:12:45.408 [2024-12-08 20:08:17.126339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:45.408 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.408 20:08:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:45.408 [2024-12-08 20:08:17.128198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.347 "name": "raid_bdev1", 00:12:46.347 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:46.347 "strip_size_kb": 0, 00:12:46.347 
"state": "online", 00:12:46.347 "raid_level": "raid1", 00:12:46.347 "superblock": true, 00:12:46.347 "num_base_bdevs": 2, 00:12:46.347 "num_base_bdevs_discovered": 2, 00:12:46.347 "num_base_bdevs_operational": 2, 00:12:46.347 "process": { 00:12:46.347 "type": "rebuild", 00:12:46.347 "target": "spare", 00:12:46.347 "progress": { 00:12:46.347 "blocks": 20480, 00:12:46.347 "percent": 32 00:12:46.347 } 00:12:46.347 }, 00:12:46.347 "base_bdevs_list": [ 00:12:46.347 { 00:12:46.347 "name": "spare", 00:12:46.347 "uuid": "73a10fab-9297-5a12-9ca3-66afd748bbe7", 00:12:46.347 "is_configured": true, 00:12:46.347 "data_offset": 2048, 00:12:46.347 "data_size": 63488 00:12:46.347 }, 00:12:46.347 { 00:12:46.347 "name": "BaseBdev2", 00:12:46.347 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:46.347 "is_configured": true, 00:12:46.347 "data_offset": 2048, 00:12:46.347 "data_size": 63488 00:12:46.347 } 00:12:46.347 ] 00:12:46.347 }' 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.347 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.347 [2024-12-08 20:08:18.284362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.606 [2024-12-08 20:08:18.333265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:46.606 [2024-12-08 20:08:18.333318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.606 [2024-12-08 20:08:18.333331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.606 [2024-12-08 20:08:18.333343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.606 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.606 "name": "raid_bdev1", 00:12:46.606 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:46.606 "strip_size_kb": 0, 00:12:46.606 "state": "online", 00:12:46.606 "raid_level": "raid1", 00:12:46.606 "superblock": true, 00:12:46.606 "num_base_bdevs": 2, 00:12:46.606 "num_base_bdevs_discovered": 1, 00:12:46.606 "num_base_bdevs_operational": 1, 00:12:46.606 "base_bdevs_list": [ 00:12:46.606 { 00:12:46.606 "name": null, 00:12:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.606 "is_configured": false, 00:12:46.607 "data_offset": 0, 00:12:46.607 "data_size": 63488 00:12:46.607 }, 00:12:46.607 { 00:12:46.607 "name": "BaseBdev2", 00:12:46.607 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:46.607 "is_configured": true, 00:12:46.607 "data_offset": 2048, 00:12:46.607 "data_size": 63488 00:12:46.607 } 00:12:46.607 ] 00:12:46.607 }' 00:12:46.607 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.607 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.177 "name": "raid_bdev1", 00:12:47.177 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:47.177 "strip_size_kb": 0, 00:12:47.177 "state": "online", 00:12:47.177 "raid_level": "raid1", 00:12:47.177 "superblock": true, 00:12:47.177 "num_base_bdevs": 2, 00:12:47.177 "num_base_bdevs_discovered": 1, 00:12:47.177 "num_base_bdevs_operational": 1, 00:12:47.177 "base_bdevs_list": [ 00:12:47.177 { 00:12:47.177 "name": null, 00:12:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.177 "is_configured": false, 00:12:47.177 "data_offset": 0, 00:12:47.177 "data_size": 63488 00:12:47.177 }, 00:12:47.177 { 00:12:47.177 "name": "BaseBdev2", 00:12:47.177 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:47.177 "is_configured": true, 00:12:47.177 "data_offset": 2048, 00:12:47.177 "data_size": 63488 00:12:47.177 } 00:12:47.177 ] 00:12:47.177 }' 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.177 20:08:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 [2024-12-08 20:08:19.009275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.177 [2024-12-08 20:08:19.009374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.177 [2024-12-08 20:08:19.009419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:47.177 [2024-12-08 20:08:19.009456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.177 [2024-12-08 20:08:19.009971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.177 [2024-12-08 20:08:19.010028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.177 [2024-12-08 20:08:19.010151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:47.177 [2024-12-08 20:08:19.010199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.177 [2024-12-08 20:08:19.010244] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:47.177 [2024-12-08 20:08:19.010307] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:47.177 BaseBdev1 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.177 20:08:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.117 "name": "raid_bdev1", 00:12:48.117 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:48.117 "strip_size_kb": 0, 00:12:48.117 "state": "online", 00:12:48.117 "raid_level": "raid1", 00:12:48.117 "superblock": true, 00:12:48.117 "num_base_bdevs": 2, 00:12:48.117 "num_base_bdevs_discovered": 1, 00:12:48.117 "num_base_bdevs_operational": 1, 00:12:48.117 "base_bdevs_list": [ 00:12:48.117 { 00:12:48.117 "name": null, 00:12:48.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.117 "is_configured": false, 00:12:48.117 "data_offset": 0, 00:12:48.117 "data_size": 63488 00:12:48.117 }, 00:12:48.117 { 00:12:48.117 "name": "BaseBdev2", 00:12:48.117 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:48.117 "is_configured": true, 00:12:48.117 "data_offset": 2048, 00:12:48.117 "data_size": 63488 00:12:48.117 } 00:12:48.117 ] 00:12:48.117 }' 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.117 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.684 "name": "raid_bdev1", 00:12:48.684 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:48.684 "strip_size_kb": 0, 00:12:48.684 "state": "online", 00:12:48.684 "raid_level": "raid1", 00:12:48.684 "superblock": true, 00:12:48.684 "num_base_bdevs": 2, 00:12:48.684 "num_base_bdevs_discovered": 1, 00:12:48.684 "num_base_bdevs_operational": 1, 00:12:48.684 "base_bdevs_list": [ 00:12:48.684 { 00:12:48.684 "name": null, 00:12:48.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.684 "is_configured": false, 00:12:48.684 "data_offset": 0, 00:12:48.684 "data_size": 63488 00:12:48.684 }, 00:12:48.684 { 00:12:48.684 "name": "BaseBdev2", 00:12:48.684 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:48.684 "is_configured": true, 00:12:48.684 "data_offset": 2048, 00:12:48.684 "data_size": 63488 00:12:48.684 } 00:12:48.684 ] 00:12:48.684 }' 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.684 [2024-12-08 20:08:20.598834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.684 [2024-12-08 20:08:20.599082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:48.684 [2024-12-08 20:08:20.599141] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:48.684 request: 00:12:48.684 { 00:12:48.684 "base_bdev": "BaseBdev1", 00:12:48.684 "raid_bdev": "raid_bdev1", 00:12:48.684 "method": "bdev_raid_add_base_bdev", 00:12:48.684 "req_id": 1 00:12:48.684 } 00:12:48.684 Got JSON-RPC error response 00:12:48.684 response: 00:12:48.684 { 00:12:48.684 "code": -22, 00:12:48.684 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:48.684 } 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.684 20:08:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.063 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.064 "name": "raid_bdev1", 00:12:50.064 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:50.064 "strip_size_kb": 0, 00:12:50.064 "state": "online", 00:12:50.064 "raid_level": "raid1", 00:12:50.064 "superblock": true, 00:12:50.064 "num_base_bdevs": 2, 00:12:50.064 "num_base_bdevs_discovered": 1, 00:12:50.064 "num_base_bdevs_operational": 1, 00:12:50.064 "base_bdevs_list": [ 00:12:50.064 { 00:12:50.064 "name": null, 00:12:50.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.064 "is_configured": false, 00:12:50.064 "data_offset": 0, 00:12:50.064 "data_size": 63488 00:12:50.064 }, 00:12:50.064 { 00:12:50.064 "name": "BaseBdev2", 00:12:50.064 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:50.064 "is_configured": true, 00:12:50.064 "data_offset": 2048, 00:12:50.064 "data_size": 63488 00:12:50.064 } 00:12:50.064 ] 00:12:50.064 }' 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.064 20:08:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.322 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.323 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.323 "name": "raid_bdev1", 00:12:50.323 "uuid": "e80c71b8-0e6e-4ab9-8574-410c2ed57a12", 00:12:50.323 "strip_size_kb": 0, 00:12:50.323 "state": "online", 00:12:50.323 "raid_level": "raid1", 00:12:50.323 "superblock": true, 00:12:50.323 "num_base_bdevs": 2, 00:12:50.323 "num_base_bdevs_discovered": 1, 00:12:50.323 "num_base_bdevs_operational": 1, 00:12:50.323 "base_bdevs_list": [ 00:12:50.323 { 00:12:50.323 "name": null, 00:12:50.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.323 "is_configured": false, 00:12:50.323 "data_offset": 0, 00:12:50.323 "data_size": 63488 00:12:50.323 }, 00:12:50.323 { 00:12:50.323 "name": "BaseBdev2", 00:12:50.323 "uuid": "c33e84bd-323c-5950-8f34-a34c82af0975", 00:12:50.323 "is_configured": true, 00:12:50.323 "data_offset": 2048, 00:12:50.323 "data_size": 63488 00:12:50.323 } 00:12:50.323 ] 00:12:50.323 }' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.323 20:08:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76586 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76586 ']' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76586 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76586 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.323 killing process with pid 76586 00:12:50.323 Received shutdown signal, test time was about 17.017966 seconds 00:12:50.323 00:12:50.323 Latency(us) 00:12:50.323 [2024-12-08T20:08:22.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.323 [2024-12-08T20:08:22.301Z] =================================================================================================================== 00:12:50.323 [2024-12-08T20:08:22.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76586' 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76586 00:12:50.323 [2024-12-08 20:08:22.245393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.323 [2024-12-08 20:08:22.245517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.323 20:08:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76586 00:12:50.323 [2024-12-08 20:08:22.245573] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.323 [2024-12-08 20:08:22.245583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:50.581 [2024-12-08 20:08:22.463259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:51.959 00:12:51.959 real 0m19.948s 00:12:51.959 user 0m25.978s 00:12:51.959 sys 0m2.161s 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.959 ************************************ 00:12:51.959 END TEST raid_rebuild_test_sb_io 00:12:51.959 ************************************ 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.959 20:08:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:51.959 20:08:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:51.959 20:08:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:51.959 20:08:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.959 20:08:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.959 ************************************ 00:12:51.959 START TEST raid_rebuild_test 00:12:51.959 ************************************ 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:51.959 20:08:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77275 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77275 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77275 ']' 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.959 20:08:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.959 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:51.959 Zero copy mechanism will not be used. 
00:12:51.959 [2024-12-08 20:08:23.758382] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:51.959 [2024-12-08 20:08:23.758497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77275 ] 00:12:51.959 [2024-12-08 20:08:23.931889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.219 [2024-12-08 20:08:24.037841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.477 [2024-12-08 20:08:24.225445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.477 [2024-12-08 20:08:24.225476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.741 BaseBdev1_malloc 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.741 
[2024-12-08 20:08:24.622857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:52.741 [2024-12-08 20:08:24.622912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.741 [2024-12-08 20:08:24.622951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:52.741 [2024-12-08 20:08:24.622974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.741 [2024-12-08 20:08:24.624986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.741 [2024-12-08 20:08:24.625019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:52.741 BaseBdev1 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.741 BaseBdev2_malloc 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.741 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.741 [2024-12-08 20:08:24.674828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:52.741 [2024-12-08 20:08:24.674881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:52.742 [2024-12-08 20:08:24.674903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.742 [2024-12-08 20:08:24.674913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.742 [2024-12-08 20:08:24.676907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.742 [2024-12-08 20:08:24.677003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:52.742 BaseBdev2 00:12:52.742 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.742 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.742 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:52.742 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.742 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 BaseBdev3_malloc 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 [2024-12-08 20:08:24.741587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:53.005 [2024-12-08 20:08:24.741638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.005 [2024-12-08 20:08:24.741659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:53.005 [2024-12-08 20:08:24.741668] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.005 [2024-12-08 20:08:24.743647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.005 [2024-12-08 20:08:24.743686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:53.005 BaseBdev3 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 BaseBdev4_malloc 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 [2024-12-08 20:08:24.795466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:53.005 [2024-12-08 20:08:24.795562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.005 [2024-12-08 20:08:24.795601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:53.005 [2024-12-08 20:08:24.795632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.005 [2024-12-08 20:08:24.797626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.005 [2024-12-08 20:08:24.797699] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:53.005 BaseBdev4 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 spare_malloc 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 spare_delay 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 [2024-12-08 20:08:24.861071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:53.005 [2024-12-08 20:08:24.861155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.005 [2024-12-08 20:08:24.861191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:53.005 [2024-12-08 20:08:24.861202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.005 [2024-12-08 
20:08:24.863179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.005 [2024-12-08 20:08:24.863215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.005 spare 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 [2024-12-08 20:08:24.873095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.005 [2024-12-08 20:08:24.874795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.005 [2024-12-08 20:08:24.874857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.005 [2024-12-08 20:08:24.874907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.005 [2024-12-08 20:08:24.874992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.005 [2024-12-08 20:08:24.875006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:53.005 [2024-12-08 20:08:24.875271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:53.005 [2024-12-08 20:08:24.875443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.005 [2024-12-08 20:08:24.875456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.005 [2024-12-08 20:08:24.875591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.005 "name": "raid_bdev1", 00:12:53.005 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:12:53.005 "strip_size_kb": 0, 00:12:53.005 "state": "online", 00:12:53.005 "raid_level": 
"raid1", 00:12:53.005 "superblock": false, 00:12:53.005 "num_base_bdevs": 4, 00:12:53.005 "num_base_bdevs_discovered": 4, 00:12:53.005 "num_base_bdevs_operational": 4, 00:12:53.005 "base_bdevs_list": [ 00:12:53.005 { 00:12:53.005 "name": "BaseBdev1", 00:12:53.005 "uuid": "f2f3b595-7b73-5ff1-b9f9-74c18bf41408", 00:12:53.005 "is_configured": true, 00:12:53.005 "data_offset": 0, 00:12:53.005 "data_size": 65536 00:12:53.005 }, 00:12:53.005 { 00:12:53.005 "name": "BaseBdev2", 00:12:53.005 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:12:53.005 "is_configured": true, 00:12:53.005 "data_offset": 0, 00:12:53.005 "data_size": 65536 00:12:53.005 }, 00:12:53.005 { 00:12:53.005 "name": "BaseBdev3", 00:12:53.005 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:12:53.005 "is_configured": true, 00:12:53.005 "data_offset": 0, 00:12:53.005 "data_size": 65536 00:12:53.005 }, 00:12:53.005 { 00:12:53.005 "name": "BaseBdev4", 00:12:53.005 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:12:53.005 "is_configured": true, 00:12:53.005 "data_offset": 0, 00:12:53.005 "data_size": 65536 00:12:53.005 } 00:12:53.005 ] 00:12:53.005 }' 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.005 20:08:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.575 [2024-12-08 20:08:25.332728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.575 20:08:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.575 20:08:25 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:53.834 [2024-12-08 20:08:25.608002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:53.834 /dev/nbd0 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.834 1+0 records in 00:12:53.834 1+0 records out 00:12:53.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024219 s, 16.9 MB/s 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
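The raid bdev was registered above with `blockcnt 65536, blocklen 512`, so the full-device `dd` over `/dev/nbd0` that follows should transfer 65536 * 512 bytes. A quick arithmetic check in plain bash (variable names here are illustrative, not from the test scripts):

```shell
#!/usr/bin/env bash
# Expected full-device transfer size for the raid1 bdev registered above:
# 65536 blocks of 512 bytes each.
blockcnt=65536
blocklen=512
total_bytes=$(( blockcnt * blocklen ))
echo "$total_bytes bytes ($(( total_bytes / 1024 / 1024 )) MiB)"  # 33554432 bytes (32 MiB)
```

This matches the `33554432 bytes (34 MB, 32 MiB) copied` line dd reports once the write completes.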
00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:53.834 20:08:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:59.110 65536+0 records in 00:12:59.110 65536+0 records out 00:12:59.110 33554432 bytes (34 MB, 32 MiB) copied, 5.17475 s, 6.5 MB/s 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.110 20:08:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.369 [2024-12-08 20:08:31.096026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.369 
20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.369 [2024-12-08 20:08:31.108099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.369 20:08:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.369 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.370 "name": "raid_bdev1", 00:12:59.370 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:12:59.370 "strip_size_kb": 0, 00:12:59.370 "state": "online", 00:12:59.370 "raid_level": "raid1", 00:12:59.370 "superblock": false, 00:12:59.370 "num_base_bdevs": 4, 00:12:59.370 "num_base_bdevs_discovered": 3, 00:12:59.370 "num_base_bdevs_operational": 3, 00:12:59.370 "base_bdevs_list": [ 00:12:59.370 { 00:12:59.370 "name": null, 00:12:59.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.370 "is_configured": false, 00:12:59.370 "data_offset": 0, 00:12:59.370 "data_size": 65536 00:12:59.370 }, 00:12:59.370 { 00:12:59.370 "name": "BaseBdev2", 00:12:59.370 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:12:59.370 "is_configured": true, 00:12:59.370 "data_offset": 0, 00:12:59.370 "data_size": 65536 00:12:59.370 }, 00:12:59.370 { 00:12:59.370 "name": "BaseBdev3", 00:12:59.370 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:12:59.370 "is_configured": true, 00:12:59.370 "data_offset": 0, 00:12:59.370 "data_size": 65536 00:12:59.370 }, 00:12:59.370 { 00:12:59.370 "name": "BaseBdev4", 00:12:59.370 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:12:59.370 
"is_configured": true, 00:12:59.370 "data_offset": 0, 00:12:59.370 "data_size": 65536 00:12:59.370 } 00:12:59.370 ] 00:12:59.370 }' 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.370 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.630 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.630 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.630 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.630 [2024-12-08 20:08:31.531403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.630 [2024-12-08 20:08:31.546282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:12:59.630 20:08:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.630 20:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:59.630 [2024-12-08 20:08:31.548172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.013 
20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.013 "name": "raid_bdev1", 00:13:01.013 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:01.013 "strip_size_kb": 0, 00:13:01.013 "state": "online", 00:13:01.013 "raid_level": "raid1", 00:13:01.013 "superblock": false, 00:13:01.013 "num_base_bdevs": 4, 00:13:01.013 "num_base_bdevs_discovered": 4, 00:13:01.013 "num_base_bdevs_operational": 4, 00:13:01.013 "process": { 00:13:01.013 "type": "rebuild", 00:13:01.013 "target": "spare", 00:13:01.013 "progress": { 00:13:01.013 "blocks": 20480, 00:13:01.013 "percent": 31 00:13:01.013 } 00:13:01.013 }, 00:13:01.013 "base_bdevs_list": [ 00:13:01.013 { 00:13:01.013 "name": "spare", 00:13:01.013 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": "BaseBdev2", 00:13:01.013 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": "BaseBdev3", 00:13:01.013 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": "BaseBdev4", 00:13:01.013 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 } 00:13:01.013 ] 00:13:01.013 }' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 [2024-12-08 20:08:32.711430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.013 [2024-12-08 20:08:32.752934] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.013 [2024-12-08 20:08:32.753046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.013 [2024-12-08 20:08:32.753064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.013 [2024-12-08 20:08:32.753074] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.013 20:08:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.013 "name": "raid_bdev1", 00:13:01.013 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:01.013 "strip_size_kb": 0, 00:13:01.013 "state": "online", 00:13:01.013 "raid_level": "raid1", 00:13:01.013 "superblock": false, 00:13:01.013 "num_base_bdevs": 4, 00:13:01.013 "num_base_bdevs_discovered": 3, 00:13:01.013 "num_base_bdevs_operational": 3, 00:13:01.013 "base_bdevs_list": [ 00:13:01.013 { 00:13:01.013 "name": null, 00:13:01.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.013 "is_configured": false, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": "BaseBdev2", 00:13:01.013 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": 
"BaseBdev3", 00:13:01.013 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 }, 00:13:01.013 { 00:13:01.013 "name": "BaseBdev4", 00:13:01.013 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:01.013 "is_configured": true, 00:13:01.013 "data_offset": 0, 00:13:01.013 "data_size": 65536 00:13:01.013 } 00:13:01.013 ] 00:13:01.013 }' 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.013 20:08:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.272 "name": "raid_bdev1", 00:13:01.272 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:01.272 "strip_size_kb": 0, 00:13:01.272 "state": "online", 00:13:01.272 "raid_level": 
"raid1", 00:13:01.272 "superblock": false, 00:13:01.272 "num_base_bdevs": 4, 00:13:01.272 "num_base_bdevs_discovered": 3, 00:13:01.272 "num_base_bdevs_operational": 3, 00:13:01.272 "base_bdevs_list": [ 00:13:01.272 { 00:13:01.272 "name": null, 00:13:01.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.272 "is_configured": false, 00:13:01.272 "data_offset": 0, 00:13:01.272 "data_size": 65536 00:13:01.272 }, 00:13:01.272 { 00:13:01.272 "name": "BaseBdev2", 00:13:01.272 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:13:01.272 "is_configured": true, 00:13:01.272 "data_offset": 0, 00:13:01.272 "data_size": 65536 00:13:01.272 }, 00:13:01.272 { 00:13:01.272 "name": "BaseBdev3", 00:13:01.272 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:01.272 "is_configured": true, 00:13:01.272 "data_offset": 0, 00:13:01.272 "data_size": 65536 00:13:01.272 }, 00:13:01.272 { 00:13:01.272 "name": "BaseBdev4", 00:13:01.272 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:01.272 "is_configured": true, 00:13:01.272 "data_offset": 0, 00:13:01.272 "data_size": 65536 00:13:01.272 } 00:13:01.272 ] 00:13:01.272 }' 00:13:01.272 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.531 [2024-12-08 20:08:33.348988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:01.531 [2024-12-08 20:08:33.363117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.531 20:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.531 [2024-12-08 20:08:33.365024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.472 "name": "raid_bdev1", 00:13:02.472 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:02.472 "strip_size_kb": 0, 00:13:02.472 "state": "online", 00:13:02.472 "raid_level": "raid1", 00:13:02.472 "superblock": false, 00:13:02.472 "num_base_bdevs": 4, 00:13:02.472 "num_base_bdevs_discovered": 4, 00:13:02.472 "num_base_bdevs_operational": 4, 
00:13:02.472 "process": { 00:13:02.472 "type": "rebuild", 00:13:02.472 "target": "spare", 00:13:02.472 "progress": { 00:13:02.472 "blocks": 20480, 00:13:02.472 "percent": 31 00:13:02.472 } 00:13:02.472 }, 00:13:02.472 "base_bdevs_list": [ 00:13:02.472 { 00:13:02.472 "name": "spare", 00:13:02.472 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:02.472 "is_configured": true, 00:13:02.472 "data_offset": 0, 00:13:02.472 "data_size": 65536 00:13:02.472 }, 00:13:02.472 { 00:13:02.472 "name": "BaseBdev2", 00:13:02.472 "uuid": "dbbfb223-1451-5545-ba90-7493ed73891f", 00:13:02.472 "is_configured": true, 00:13:02.472 "data_offset": 0, 00:13:02.472 "data_size": 65536 00:13:02.472 }, 00:13:02.472 { 00:13:02.472 "name": "BaseBdev3", 00:13:02.472 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:02.472 "is_configured": true, 00:13:02.472 "data_offset": 0, 00:13:02.472 "data_size": 65536 00:13:02.472 }, 00:13:02.472 { 00:13:02.472 "name": "BaseBdev4", 00:13:02.472 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:02.472 "is_configured": true, 00:13:02.472 "data_offset": 0, 00:13:02.472 "data_size": 65536 00:13:02.472 } 00:13:02.472 ] 00:13:02.472 }' 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.472 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.732 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.732 [2024-12-08 20:08:34.500371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.733 [2024-12-08 20:08:34.569899] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.733 "name": "raid_bdev1", 00:13:02.733 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:02.733 "strip_size_kb": 0, 00:13:02.733 "state": "online", 00:13:02.733 "raid_level": "raid1", 00:13:02.733 "superblock": false, 00:13:02.733 "num_base_bdevs": 4, 00:13:02.733 "num_base_bdevs_discovered": 3, 00:13:02.733 "num_base_bdevs_operational": 3, 00:13:02.733 "process": { 00:13:02.733 "type": "rebuild", 00:13:02.733 "target": "spare", 00:13:02.733 "progress": { 00:13:02.733 "blocks": 24576, 00:13:02.733 "percent": 37 00:13:02.733 } 00:13:02.733 }, 00:13:02.733 "base_bdevs_list": [ 00:13:02.733 { 00:13:02.733 "name": "spare", 00:13:02.733 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:02.733 "is_configured": true, 00:13:02.733 "data_offset": 0, 00:13:02.733 "data_size": 65536 00:13:02.733 }, 00:13:02.733 { 00:13:02.733 "name": null, 00:13:02.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.733 "is_configured": false, 00:13:02.733 "data_offset": 0, 00:13:02.733 "data_size": 65536 00:13:02.733 }, 00:13:02.733 { 00:13:02.733 "name": "BaseBdev3", 00:13:02.733 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:02.733 "is_configured": true, 00:13:02.733 "data_offset": 0, 00:13:02.733 "data_size": 65536 00:13:02.733 }, 00:13:02.733 { 00:13:02.733 "name": "BaseBdev4", 00:13:02.733 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:02.733 "is_configured": true, 00:13:02.733 "data_offset": 0, 00:13:02.733 "data_size": 65536 00:13:02.733 } 00:13:02.733 ] 00:13:02.733 }' 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.733 20:08:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.733 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.993 "name": "raid_bdev1", 00:13:02.993 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:02.993 "strip_size_kb": 0, 00:13:02.993 "state": "online", 00:13:02.993 "raid_level": "raid1", 00:13:02.993 "superblock": false, 00:13:02.993 "num_base_bdevs": 4, 00:13:02.993 "num_base_bdevs_discovered": 3, 00:13:02.993 "num_base_bdevs_operational": 3, 00:13:02.993 "process": { 00:13:02.993 "type": "rebuild", 00:13:02.993 "target": "spare", 00:13:02.993 "progress": { 00:13:02.993 "blocks": 26624, 00:13:02.993 "percent": 40 
00:13:02.993 } 00:13:02.993 }, 00:13:02.993 "base_bdevs_list": [ 00:13:02.993 { 00:13:02.993 "name": "spare", 00:13:02.993 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:02.993 "is_configured": true, 00:13:02.993 "data_offset": 0, 00:13:02.993 "data_size": 65536 00:13:02.993 }, 00:13:02.993 { 00:13:02.993 "name": null, 00:13:02.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.993 "is_configured": false, 00:13:02.993 "data_offset": 0, 00:13:02.993 "data_size": 65536 00:13:02.993 }, 00:13:02.993 { 00:13:02.993 "name": "BaseBdev3", 00:13:02.993 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:02.993 "is_configured": true, 00:13:02.993 "data_offset": 0, 00:13:02.993 "data_size": 65536 00:13:02.993 }, 00:13:02.993 { 00:13:02.993 "name": "BaseBdev4", 00:13:02.993 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:02.993 "is_configured": true, 00:13:02.993 "data_offset": 0, 00:13:02.993 "data_size": 65536 00:13:02.993 } 00:13:02.993 ] 00:13:02.993 }' 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.993 20:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.931 20:08:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.931 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.931 "name": "raid_bdev1", 00:13:03.931 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:03.931 "strip_size_kb": 0, 00:13:03.931 "state": "online", 00:13:03.931 "raid_level": "raid1", 00:13:03.931 "superblock": false, 00:13:03.931 "num_base_bdevs": 4, 00:13:03.931 "num_base_bdevs_discovered": 3, 00:13:03.931 "num_base_bdevs_operational": 3, 00:13:03.931 "process": { 00:13:03.931 "type": "rebuild", 00:13:03.931 "target": "spare", 00:13:03.931 "progress": { 00:13:03.931 "blocks": 49152, 00:13:03.931 "percent": 75 00:13:03.931 } 00:13:03.931 }, 00:13:03.931 "base_bdevs_list": [ 00:13:03.931 { 00:13:03.931 "name": "spare", 00:13:03.931 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:03.931 "is_configured": true, 00:13:03.931 "data_offset": 0, 00:13:03.931 "data_size": 65536 00:13:03.931 }, 00:13:03.931 { 00:13:03.931 "name": null, 00:13:03.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.931 "is_configured": false, 00:13:03.931 "data_offset": 0, 00:13:03.931 "data_size": 65536 00:13:03.931 }, 00:13:03.931 { 00:13:03.931 "name": "BaseBdev3", 00:13:03.931 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:03.931 "is_configured": true, 
00:13:03.931 "data_offset": 0, 00:13:03.931 "data_size": 65536 00:13:03.931 }, 00:13:03.931 { 00:13:03.931 "name": "BaseBdev4", 00:13:03.931 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:03.931 "is_configured": true, 00:13:03.931 "data_offset": 0, 00:13:03.931 "data_size": 65536 00:13:03.931 } 00:13:03.931 ] 00:13:03.931 }' 00:13:04.191 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.191 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.191 20:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.191 20:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.191 20:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.759 [2024-12-08 20:08:36.577868] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.759 [2024-12-08 20:08:36.578016] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.759 [2024-12-08 20:08:36.578086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.327 "name": "raid_bdev1", 00:13:05.327 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:05.327 "strip_size_kb": 0, 00:13:05.327 "state": "online", 00:13:05.327 "raid_level": "raid1", 00:13:05.327 "superblock": false, 00:13:05.327 "num_base_bdevs": 4, 00:13:05.327 "num_base_bdevs_discovered": 3, 00:13:05.327 "num_base_bdevs_operational": 3, 00:13:05.327 "base_bdevs_list": [ 00:13:05.327 { 00:13:05.327 "name": "spare", 00:13:05.327 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:05.327 "is_configured": true, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 }, 00:13:05.327 { 00:13:05.327 "name": null, 00:13:05.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.327 "is_configured": false, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 }, 00:13:05.327 { 00:13:05.327 "name": "BaseBdev3", 00:13:05.327 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:05.327 "is_configured": true, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 }, 00:13:05.327 { 00:13:05.327 "name": "BaseBdev4", 00:13:05.327 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:05.327 "is_configured": true, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 } 00:13:05.327 ] 00:13:05.327 }' 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.327 20:08:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.327 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.327 "name": "raid_bdev1", 00:13:05.327 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:05.327 "strip_size_kb": 0, 00:13:05.327 "state": "online", 00:13:05.327 "raid_level": "raid1", 00:13:05.327 "superblock": false, 00:13:05.327 "num_base_bdevs": 4, 00:13:05.327 "num_base_bdevs_discovered": 3, 00:13:05.327 "num_base_bdevs_operational": 3, 00:13:05.327 "base_bdevs_list": [ 00:13:05.327 { 00:13:05.327 "name": "spare", 
00:13:05.327 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:05.327 "is_configured": true, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 }, 00:13:05.327 { 00:13:05.327 "name": null, 00:13:05.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.327 "is_configured": false, 00:13:05.327 "data_offset": 0, 00:13:05.327 "data_size": 65536 00:13:05.327 }, 00:13:05.327 { 00:13:05.328 "name": "BaseBdev3", 00:13:05.328 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:05.328 "is_configured": true, 00:13:05.328 "data_offset": 0, 00:13:05.328 "data_size": 65536 00:13:05.328 }, 00:13:05.328 { 00:13:05.328 "name": "BaseBdev4", 00:13:05.328 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:05.328 "is_configured": true, 00:13:05.328 "data_offset": 0, 00:13:05.328 "data_size": 65536 00:13:05.328 } 00:13:05.328 ] 00:13:05.328 }' 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.328 20:08:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.328 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.587 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.587 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.587 "name": "raid_bdev1", 00:13:05.587 "uuid": "4d027e77-85ac-4c56-ac1f-36b95f221da7", 00:13:05.587 "strip_size_kb": 0, 00:13:05.587 "state": "online", 00:13:05.587 "raid_level": "raid1", 00:13:05.587 "superblock": false, 00:13:05.587 "num_base_bdevs": 4, 00:13:05.587 "num_base_bdevs_discovered": 3, 00:13:05.587 "num_base_bdevs_operational": 3, 00:13:05.587 "base_bdevs_list": [ 00:13:05.587 { 00:13:05.587 "name": "spare", 00:13:05.587 "uuid": "f5711164-bdbc-5f5a-81e4-b929c03c8a62", 00:13:05.587 "is_configured": true, 00:13:05.587 "data_offset": 0, 00:13:05.587 "data_size": 65536 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": null, 00:13:05.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.587 "is_configured": false, 00:13:05.587 "data_offset": 0, 00:13:05.587 "data_size": 65536 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": "BaseBdev3", 00:13:05.587 "uuid": "76015ed7-c69c-597a-826f-b5c48cb510da", 00:13:05.587 "is_configured": true, 
00:13:05.587 "data_offset": 0, 00:13:05.587 "data_size": 65536 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": "BaseBdev4", 00:13:05.587 "uuid": "01c0ec0d-162a-5191-a8d4-61023987f16d", 00:13:05.587 "is_configured": true, 00:13:05.587 "data_offset": 0, 00:13:05.587 "data_size": 65536 00:13:05.587 } 00:13:05.587 ] 00:13:05.587 }' 00:13:05.587 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.587 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.847 [2024-12-08 20:08:37.713791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.847 [2024-12-08 20:08:37.713869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.847 [2024-12-08 20:08:37.713970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.847 [2024-12-08 20:08:37.714065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.847 [2024-12-08 20:08:37.714076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.847 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:06.106 /dev/nbd0 00:13:06.106 20:08:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.106 20:08:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.106 1+0 records in 00:13:06.106 1+0 records out 00:13:06.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500186 s, 8.2 MB/s 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.106 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.366 /dev/nbd1 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.366 
20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.366 1+0 records in 00:13:06.366 1+0 records out 00:13:06.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403242 s, 10.2 MB/s 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:06.366 20:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.626 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.885 
20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.885 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77275 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77275 ']' 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77275 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.886 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77275 00:13:07.145 killing process with pid 77275 00:13:07.145 Received shutdown signal, test time was about 60.000000 seconds 00:13:07.145 00:13:07.145 Latency(us) 00:13:07.145 [2024-12-08T20:08:39.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.145 [2024-12-08T20:08:39.123Z] =================================================================================================================== 00:13:07.145 [2024-12-08T20:08:39.123Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:07.145 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.145 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.145 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77275' 00:13:07.145 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77275 00:13:07.145 [2024-12-08 20:08:38.883349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.145 20:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77275 00:13:07.405 [2024-12-08 20:08:39.338780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.782 20:08:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.782 00:13:08.782 real 0m16.741s 00:13:08.782 user 0m18.979s 00:13:08.782 sys 0m2.957s 00:13:08.782 20:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.782 ************************************ 00:13:08.782 END TEST raid_rebuild_test 00:13:08.782 ************************************ 00:13:08.782 20:08:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.782 20:08:40 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:08.782 20:08:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.782 20:08:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.782 20:08:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.782 ************************************ 00:13:08.782 START TEST raid_rebuild_test_sb 00:13:08.783 ************************************ 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:08.783 20:08:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77710 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77710 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77710 ']' 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:08.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.783 20:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.783 [2024-12-08 20:08:40.577223] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:08.783 [2024-12-08 20:08:40.577427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.783 Zero copy mechanism will not be used. 00:13:08.783 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77710 ] 00:13:08.783 [2024-12-08 20:08:40.750020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.043 [2024-12-08 20:08:40.855063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.302 [2024-12-08 20:08:41.040065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.302 [2024-12-08 20:08:41.040147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.562 
BaseBdev1_malloc 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.562 [2024-12-08 20:08:41.435704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.562 [2024-12-08 20:08:41.435785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.562 [2024-12-08 20:08:41.435811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.562 [2024-12-08 20:08:41.435823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.562 [2024-12-08 20:08:41.437827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.562 [2024-12-08 20:08:41.437865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.562 BaseBdev1 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.562 BaseBdev2_malloc 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.562 [2024-12-08 20:08:41.488887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.562 [2024-12-08 20:08:41.489028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.562 [2024-12-08 20:08:41.489057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.562 [2024-12-08 20:08:41.489069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.562 [2024-12-08 20:08:41.491060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.562 [2024-12-08 20:08:41.491097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.562 BaseBdev2 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.562 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 BaseBdev3_malloc 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 [2024-12-08 20:08:41.572966] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:09.824 [2024-12-08 20:08:41.573025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.824 [2024-12-08 20:08:41.573047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.824 [2024-12-08 20:08:41.573057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.824 [2024-12-08 20:08:41.574979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.824 [2024-12-08 20:08:41.575016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.824 BaseBdev3 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 BaseBdev4_malloc 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 [2024-12-08 20:08:41.625188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:13:09.824 [2024-12-08 20:08:41.625266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.824 [2024-12-08 20:08:41.625286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.824 [2024-12-08 20:08:41.625296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.824 [2024-12-08 20:08:41.627341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.824 [2024-12-08 20:08:41.627412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.824 BaseBdev4 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 spare_malloc 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 spare_delay 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 [2024-12-08 20:08:41.689838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.824 [2024-12-08 20:08:41.689889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.824 [2024-12-08 20:08:41.689906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.824 [2024-12-08 20:08:41.689917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.824 [2024-12-08 20:08:41.692044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.824 [2024-12-08 20:08:41.692146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.824 spare 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 [2024-12-08 20:08:41.701865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.824 [2024-12-08 20:08:41.703664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.824 [2024-12-08 20:08:41.703789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.824 [2024-12-08 20:08:41.703865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.824 [2024-12-08 20:08:41.704114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.824 [2024-12-08 20:08:41.704132] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.824 [2024-12-08 20:08:41.704419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:09.824 [2024-12-08 20:08:41.704631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.824 [2024-12-08 20:08:41.704661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.824 [2024-12-08 20:08:41.704816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.824 "name": "raid_bdev1", 00:13:09.824 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:09.824 "strip_size_kb": 0, 00:13:09.824 "state": "online", 00:13:09.824 "raid_level": "raid1", 00:13:09.824 "superblock": true, 00:13:09.824 "num_base_bdevs": 4, 00:13:09.824 "num_base_bdevs_discovered": 4, 00:13:09.824 "num_base_bdevs_operational": 4, 00:13:09.824 "base_bdevs_list": [ 00:13:09.824 { 00:13:09.824 "name": "BaseBdev1", 00:13:09.824 "uuid": "a4724730-f03f-5120-94e3-4e42160c98b9", 00:13:09.824 "is_configured": true, 00:13:09.824 "data_offset": 2048, 00:13:09.824 "data_size": 63488 00:13:09.824 }, 00:13:09.824 { 00:13:09.824 "name": "BaseBdev2", 00:13:09.824 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:09.824 "is_configured": true, 00:13:09.825 "data_offset": 2048, 00:13:09.825 "data_size": 63488 00:13:09.825 }, 00:13:09.825 { 00:13:09.825 "name": "BaseBdev3", 00:13:09.825 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:09.825 "is_configured": true, 00:13:09.825 "data_offset": 2048, 00:13:09.825 "data_size": 63488 00:13:09.825 }, 00:13:09.825 { 00:13:09.825 "name": "BaseBdev4", 00:13:09.825 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:09.825 "is_configured": true, 00:13:09.825 "data_offset": 2048, 00:13:09.825 "data_size": 63488 00:13:09.825 } 00:13:09.825 ] 00:13:09.825 }' 00:13:09.825 20:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.825 20:08:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.454 [2024-12-08 20:08:42.197367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.454 20:08:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.454 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:10.714 [2024-12-08 20:08:42.472614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.714 /dev/nbd0 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.714 1+0 records in 00:13:10.714 1+0 records out 00:13:10.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033161 s, 12.4 MB/s 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:10.714 20:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:16.002 63488+0 records in 00:13:16.002 63488+0 records out 00:13:16.002 32505856 bytes (33 MB, 31 MiB) copied, 4.9866 s, 6.5 MB/s 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.002 [2024-12-08 20:08:47.739553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.002 [2024-12-08 20:08:47.751630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.002 "name": "raid_bdev1", 00:13:16.002 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:16.002 "strip_size_kb": 0, 00:13:16.002 "state": "online", 00:13:16.002 "raid_level": "raid1", 00:13:16.002 "superblock": true, 00:13:16.002 "num_base_bdevs": 4, 00:13:16.002 
"num_base_bdevs_discovered": 3, 00:13:16.002 "num_base_bdevs_operational": 3, 00:13:16.002 "base_bdevs_list": [ 00:13:16.002 { 00:13:16.002 "name": null, 00:13:16.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.002 "is_configured": false, 00:13:16.002 "data_offset": 0, 00:13:16.002 "data_size": 63488 00:13:16.002 }, 00:13:16.002 { 00:13:16.002 "name": "BaseBdev2", 00:13:16.002 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:16.002 "is_configured": true, 00:13:16.002 "data_offset": 2048, 00:13:16.002 "data_size": 63488 00:13:16.002 }, 00:13:16.002 { 00:13:16.002 "name": "BaseBdev3", 00:13:16.002 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:16.002 "is_configured": true, 00:13:16.002 "data_offset": 2048, 00:13:16.002 "data_size": 63488 00:13:16.002 }, 00:13:16.002 { 00:13:16.002 "name": "BaseBdev4", 00:13:16.002 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:16.002 "is_configured": true, 00:13:16.002 "data_offset": 2048, 00:13:16.002 "data_size": 63488 00:13:16.002 } 00:13:16.002 ] 00:13:16.002 }' 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.002 20:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.571 20:08:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.571 20:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.571 20:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.571 [2024-12-08 20:08:48.250781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.571 [2024-12-08 20:08:48.265556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:16.571 20:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.571 20:08:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.571 [2024-12-08 20:08:48.267379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.512 "name": "raid_bdev1", 00:13:17.512 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:17.512 "strip_size_kb": 0, 00:13:17.512 "state": "online", 00:13:17.512 "raid_level": "raid1", 00:13:17.512 "superblock": true, 00:13:17.512 "num_base_bdevs": 4, 00:13:17.512 "num_base_bdevs_discovered": 4, 00:13:17.512 "num_base_bdevs_operational": 4, 00:13:17.512 "process": { 00:13:17.512 "type": "rebuild", 00:13:17.512 "target": "spare", 00:13:17.512 "progress": { 00:13:17.512 "blocks": 20480, 00:13:17.512 "percent": 32 00:13:17.512 } 00:13:17.512 }, 00:13:17.512 "base_bdevs_list": [ 00:13:17.512 { 
00:13:17.512 "name": "spare", 00:13:17.512 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:17.512 "is_configured": true, 00:13:17.512 "data_offset": 2048, 00:13:17.512 "data_size": 63488 00:13:17.512 }, 00:13:17.512 { 00:13:17.512 "name": "BaseBdev2", 00:13:17.512 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:17.512 "is_configured": true, 00:13:17.512 "data_offset": 2048, 00:13:17.512 "data_size": 63488 00:13:17.512 }, 00:13:17.512 { 00:13:17.512 "name": "BaseBdev3", 00:13:17.512 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:17.512 "is_configured": true, 00:13:17.512 "data_offset": 2048, 00:13:17.512 "data_size": 63488 00:13:17.512 }, 00:13:17.512 { 00:13:17.512 "name": "BaseBdev4", 00:13:17.512 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:17.512 "is_configured": true, 00:13:17.512 "data_offset": 2048, 00:13:17.512 "data_size": 63488 00:13:17.512 } 00:13:17.512 ] 00:13:17.512 }' 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.512 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.512 [2024-12-08 20:08:49.426791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.512 [2024-12-08 20:08:49.472448] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.512 [2024-12-08 
20:08:49.472575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.512 [2024-12-08 20:08:49.472613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.512 [2024-12-08 20:08:49.472638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.772 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.773 "name": "raid_bdev1", 00:13:17.773 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:17.773 "strip_size_kb": 0, 00:13:17.773 "state": "online", 00:13:17.773 "raid_level": "raid1", 00:13:17.773 "superblock": true, 00:13:17.773 "num_base_bdevs": 4, 00:13:17.773 "num_base_bdevs_discovered": 3, 00:13:17.773 "num_base_bdevs_operational": 3, 00:13:17.773 "base_bdevs_list": [ 00:13:17.773 { 00:13:17.773 "name": null, 00:13:17.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.773 "is_configured": false, 00:13:17.773 "data_offset": 0, 00:13:17.773 "data_size": 63488 00:13:17.773 }, 00:13:17.773 { 00:13:17.773 "name": "BaseBdev2", 00:13:17.773 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:17.773 "is_configured": true, 00:13:17.773 "data_offset": 2048, 00:13:17.773 "data_size": 63488 00:13:17.773 }, 00:13:17.773 { 00:13:17.773 "name": "BaseBdev3", 00:13:17.773 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:17.773 "is_configured": true, 00:13:17.773 "data_offset": 2048, 00:13:17.773 "data_size": 63488 00:13:17.773 }, 00:13:17.773 { 00:13:17.773 "name": "BaseBdev4", 00:13:17.773 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:17.773 "is_configured": true, 00:13:17.773 "data_offset": 2048, 00:13:17.773 "data_size": 63488 00:13:17.773 } 00:13:17.773 ] 00:13:17.773 }' 00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.773 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.033 20:08:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.033 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.033 "name": "raid_bdev1", 00:13:18.033 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:18.033 "strip_size_kb": 0, 00:13:18.033 "state": "online", 00:13:18.033 "raid_level": "raid1", 00:13:18.033 "superblock": true, 00:13:18.033 "num_base_bdevs": 4, 00:13:18.033 "num_base_bdevs_discovered": 3, 00:13:18.033 "num_base_bdevs_operational": 3, 00:13:18.033 "base_bdevs_list": [ 00:13:18.033 { 00:13:18.033 "name": null, 00:13:18.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.033 "is_configured": false, 00:13:18.033 "data_offset": 0, 00:13:18.033 "data_size": 63488 00:13:18.033 }, 00:13:18.033 { 00:13:18.033 "name": "BaseBdev2", 00:13:18.033 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:18.033 "is_configured": true, 00:13:18.033 "data_offset": 2048, 00:13:18.033 "data_size": 63488 00:13:18.033 }, 00:13:18.033 { 00:13:18.033 "name": "BaseBdev3", 00:13:18.033 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:18.033 "is_configured": true, 00:13:18.033 "data_offset": 2048, 00:13:18.033 "data_size": 63488 
00:13:18.033 }, 00:13:18.034 { 00:13:18.034 "name": "BaseBdev4", 00:13:18.034 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:18.034 "is_configured": true, 00:13:18.034 "data_offset": 2048, 00:13:18.034 "data_size": 63488 00:13:18.034 } 00:13:18.034 ] 00:13:18.034 }' 00:13:18.034 20:08:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.294 [2024-12-08 20:08:50.057674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.294 [2024-12-08 20:08:50.072152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.294 20:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.294 [2024-12-08 20:08:50.074052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.234 "name": "raid_bdev1", 00:13:19.234 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:19.234 "strip_size_kb": 0, 00:13:19.234 "state": "online", 00:13:19.234 "raid_level": "raid1", 00:13:19.234 "superblock": true, 00:13:19.234 "num_base_bdevs": 4, 00:13:19.234 "num_base_bdevs_discovered": 4, 00:13:19.234 "num_base_bdevs_operational": 4, 00:13:19.234 "process": { 00:13:19.234 "type": "rebuild", 00:13:19.234 "target": "spare", 00:13:19.234 "progress": { 00:13:19.234 "blocks": 20480, 00:13:19.234 "percent": 32 00:13:19.234 } 00:13:19.234 }, 00:13:19.234 "base_bdevs_list": [ 00:13:19.234 { 00:13:19.234 "name": "spare", 00:13:19.234 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:19.234 "is_configured": true, 00:13:19.234 "data_offset": 2048, 00:13:19.234 "data_size": 63488 00:13:19.234 }, 00:13:19.234 { 00:13:19.234 "name": "BaseBdev2", 00:13:19.234 "uuid": "a17c941a-79a1-59b0-9ca8-f194c87adc52", 00:13:19.234 "is_configured": true, 00:13:19.234 "data_offset": 2048, 00:13:19.234 "data_size": 63488 00:13:19.234 }, 00:13:19.234 { 00:13:19.234 "name": "BaseBdev3", 00:13:19.234 "uuid": 
"ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:19.234 "is_configured": true, 00:13:19.234 "data_offset": 2048, 00:13:19.234 "data_size": 63488 00:13:19.234 }, 00:13:19.234 { 00:13:19.234 "name": "BaseBdev4", 00:13:19.234 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:19.234 "is_configured": true, 00:13:19.234 "data_offset": 2048, 00:13:19.234 "data_size": 63488 00:13:19.234 } 00:13:19.234 ] 00:13:19.234 }' 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.234 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.495 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 [2024-12-08 20:08:51.217123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.495 [2024-12-08 20:08:51.379268] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.495 "name": "raid_bdev1", 00:13:19.495 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:19.495 "strip_size_kb": 0, 00:13:19.495 "state": "online", 00:13:19.495 "raid_level": "raid1", 00:13:19.495 "superblock": true, 00:13:19.495 "num_base_bdevs": 4, 00:13:19.495 "num_base_bdevs_discovered": 3, 00:13:19.495 "num_base_bdevs_operational": 3, 00:13:19.495 
"process": { 00:13:19.495 "type": "rebuild", 00:13:19.495 "target": "spare", 00:13:19.495 "progress": { 00:13:19.495 "blocks": 24576, 00:13:19.495 "percent": 38 00:13:19.495 } 00:13:19.495 }, 00:13:19.495 "base_bdevs_list": [ 00:13:19.495 { 00:13:19.495 "name": "spare", 00:13:19.495 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:19.495 "is_configured": true, 00:13:19.495 "data_offset": 2048, 00:13:19.495 "data_size": 63488 00:13:19.495 }, 00:13:19.495 { 00:13:19.495 "name": null, 00:13:19.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.495 "is_configured": false, 00:13:19.495 "data_offset": 0, 00:13:19.495 "data_size": 63488 00:13:19.495 }, 00:13:19.495 { 00:13:19.495 "name": "BaseBdev3", 00:13:19.495 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:19.495 "is_configured": true, 00:13:19.495 "data_offset": 2048, 00:13:19.495 "data_size": 63488 00:13:19.495 }, 00:13:19.495 { 00:13:19.495 "name": "BaseBdev4", 00:13:19.495 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:19.495 "is_configured": true, 00:13:19.495 "data_offset": 2048, 00:13:19.495 "data_size": 63488 00:13:19.495 } 00:13:19.495 ] 00:13:19.495 }' 00:13:19.495 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.755 20:08:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.755 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.755 "name": "raid_bdev1", 00:13:19.755 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:19.755 "strip_size_kb": 0, 00:13:19.755 "state": "online", 00:13:19.756 "raid_level": "raid1", 00:13:19.756 "superblock": true, 00:13:19.756 "num_base_bdevs": 4, 00:13:19.756 "num_base_bdevs_discovered": 3, 00:13:19.756 "num_base_bdevs_operational": 3, 00:13:19.756 "process": { 00:13:19.756 "type": "rebuild", 00:13:19.756 "target": "spare", 00:13:19.756 "progress": { 00:13:19.756 "blocks": 26624, 00:13:19.756 "percent": 41 00:13:19.756 } 00:13:19.756 }, 00:13:19.756 "base_bdevs_list": [ 00:13:19.756 { 00:13:19.756 "name": "spare", 00:13:19.756 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:19.756 "is_configured": true, 00:13:19.756 "data_offset": 2048, 00:13:19.756 "data_size": 63488 00:13:19.756 }, 00:13:19.756 { 00:13:19.756 "name": null, 00:13:19.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.756 
"is_configured": false, 00:13:19.756 "data_offset": 0, 00:13:19.756 "data_size": 63488 00:13:19.756 }, 00:13:19.756 { 00:13:19.756 "name": "BaseBdev3", 00:13:19.756 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:19.756 "is_configured": true, 00:13:19.756 "data_offset": 2048, 00:13:19.756 "data_size": 63488 00:13:19.756 }, 00:13:19.756 { 00:13:19.756 "name": "BaseBdev4", 00:13:19.756 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:19.756 "is_configured": true, 00:13:19.756 "data_offset": 2048, 00:13:19.756 "data_size": 63488 00:13:19.756 } 00:13:19.756 ] 00:13:19.756 }' 00:13:19.756 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.756 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.756 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.756 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.756 20:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.696 20:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.956 "name": "raid_bdev1", 00:13:20.956 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:20.956 "strip_size_kb": 0, 00:13:20.956 "state": "online", 00:13:20.956 "raid_level": "raid1", 00:13:20.956 "superblock": true, 00:13:20.956 "num_base_bdevs": 4, 00:13:20.956 "num_base_bdevs_discovered": 3, 00:13:20.956 "num_base_bdevs_operational": 3, 00:13:20.956 "process": { 00:13:20.956 "type": "rebuild", 00:13:20.956 "target": "spare", 00:13:20.956 "progress": { 00:13:20.956 "blocks": 49152, 00:13:20.956 "percent": 77 00:13:20.956 } 00:13:20.956 }, 00:13:20.956 "base_bdevs_list": [ 00:13:20.956 { 00:13:20.956 "name": "spare", 00:13:20.956 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:20.956 "is_configured": true, 00:13:20.956 "data_offset": 2048, 00:13:20.956 "data_size": 63488 00:13:20.956 }, 00:13:20.956 { 00:13:20.956 "name": null, 00:13:20.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.956 "is_configured": false, 00:13:20.956 "data_offset": 0, 00:13:20.956 "data_size": 63488 00:13:20.956 }, 00:13:20.956 { 00:13:20.956 "name": "BaseBdev3", 00:13:20.956 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:20.956 "is_configured": true, 00:13:20.956 "data_offset": 2048, 00:13:20.956 "data_size": 63488 00:13:20.956 }, 00:13:20.956 { 00:13:20.956 "name": "BaseBdev4", 00:13:20.956 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:20.956 "is_configured": true, 00:13:20.956 "data_offset": 2048, 00:13:20.956 "data_size": 63488 00:13:20.956 } 00:13:20.956 ] 00:13:20.956 }' 00:13:20.956 
20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.956 20:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.527 [2024-12-08 20:08:53.286717] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.527 [2024-12-08 20:08:53.286782] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.527 [2024-12-08 20:08:53.286904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.097 "name": "raid_bdev1", 00:13:22.097 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:22.097 "strip_size_kb": 0, 00:13:22.097 "state": "online", 00:13:22.097 "raid_level": "raid1", 00:13:22.097 "superblock": true, 00:13:22.097 "num_base_bdevs": 4, 00:13:22.097 "num_base_bdevs_discovered": 3, 00:13:22.097 "num_base_bdevs_operational": 3, 00:13:22.097 "base_bdevs_list": [ 00:13:22.097 { 00:13:22.097 "name": "spare", 00:13:22.097 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:22.097 "is_configured": true, 00:13:22.097 "data_offset": 2048, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": null, 00:13:22.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.097 "is_configured": false, 00:13:22.097 "data_offset": 0, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": "BaseBdev3", 00:13:22.097 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:22.097 "is_configured": true, 00:13:22.097 "data_offset": 2048, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": "BaseBdev4", 00:13:22.097 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:22.097 "is_configured": true, 00:13:22.097 "data_offset": 2048, 00:13:22.097 "data_size": 63488 00:13:22.097 } 00:13:22.097 ] 00:13:22.097 }' 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.097 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.097 "name": "raid_bdev1", 00:13:22.097 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:22.097 "strip_size_kb": 0, 00:13:22.097 "state": "online", 00:13:22.097 "raid_level": "raid1", 00:13:22.097 "superblock": true, 00:13:22.097 "num_base_bdevs": 4, 00:13:22.097 "num_base_bdevs_discovered": 3, 00:13:22.097 "num_base_bdevs_operational": 3, 00:13:22.097 "base_bdevs_list": [ 00:13:22.097 { 00:13:22.097 "name": "spare", 00:13:22.097 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:22.097 "is_configured": true, 00:13:22.097 "data_offset": 2048, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": null, 00:13:22.097 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:22.097 "is_configured": false, 00:13:22.097 "data_offset": 0, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": "BaseBdev3", 00:13:22.097 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:22.097 "is_configured": true, 00:13:22.097 "data_offset": 2048, 00:13:22.097 "data_size": 63488 00:13:22.097 }, 00:13:22.097 { 00:13:22.097 "name": "BaseBdev4", 00:13:22.097 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:22.098 "is_configured": true, 00:13:22.098 "data_offset": 2048, 00:13:22.098 "data_size": 63488 00:13:22.098 } 00:13:22.098 ] 00:13:22.098 }' 00:13:22.098 20:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.098 
20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.098 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.356 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.356 "name": "raid_bdev1", 00:13:22.356 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:22.356 "strip_size_kb": 0, 00:13:22.356 "state": "online", 00:13:22.356 "raid_level": "raid1", 00:13:22.356 "superblock": true, 00:13:22.356 "num_base_bdevs": 4, 00:13:22.356 "num_base_bdevs_discovered": 3, 00:13:22.356 "num_base_bdevs_operational": 3, 00:13:22.356 "base_bdevs_list": [ 00:13:22.356 { 00:13:22.356 "name": "spare", 00:13:22.356 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:22.356 "is_configured": true, 00:13:22.356 "data_offset": 2048, 00:13:22.356 "data_size": 63488 00:13:22.356 }, 00:13:22.356 { 00:13:22.356 "name": null, 00:13:22.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.356 "is_configured": false, 00:13:22.356 "data_offset": 0, 00:13:22.356 "data_size": 63488 00:13:22.356 }, 00:13:22.356 { 00:13:22.356 "name": "BaseBdev3", 00:13:22.356 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:22.356 "is_configured": true, 00:13:22.356 "data_offset": 2048, 00:13:22.356 "data_size": 63488 00:13:22.356 }, 00:13:22.356 { 00:13:22.356 "name": "BaseBdev4", 00:13:22.356 "uuid": 
"fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:22.356 "is_configured": true, 00:13:22.356 "data_offset": 2048, 00:13:22.356 "data_size": 63488 00:13:22.356 } 00:13:22.356 ] 00:13:22.356 }' 00:13:22.356 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.356 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.615 [2024-12-08 20:08:54.482084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.615 [2024-12-08 20:08:54.482164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.615 [2024-12-08 20:08:54.482296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.615 [2024-12-08 20:08:54.482429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.615 [2024-12-08 20:08:54.482476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.615 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.616 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.876 /dev/nbd0 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.876 1+0 records in 00:13:22.876 1+0 records out 00:13:22.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219596 s, 18.7 MB/s 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.876 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:23.136 /dev/nbd1 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.136 20:08:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.136 1+0 records in 00:13:23.136 1+0 records out 00:13:23.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380563 s, 10.8 MB/s 00:13:23.136 20:08:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.136 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.394 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.652 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.910 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.910 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:23.910 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.910 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.910 [2024-12-08 20:08:55.645298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:23.910 [2024-12-08 20:08:55.645351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:23.911 [2024-12-08 20:08:55.645376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:23.911 [2024-12-08 20:08:55.645384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.911 [2024-12-08 20:08:55.647730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.911 [2024-12-08 20:08:55.647803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:23.911 [2024-12-08 20:08:55.647928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:23.911 [2024-12-08 20:08:55.648037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.911 [2024-12-08 20:08:55.648255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.911 [2024-12-08 20:08:55.648395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.911 spare 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.911 [2024-12-08 20:08:55.748330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:23.911 [2024-12-08 20:08:55.748351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.911 [2024-12-08 20:08:55.748604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:23.911 [2024-12-08 20:08:55.748757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:23.911 [2024-12-08 20:08:55.748769] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:23.911 [2024-12-08 20:08:55.748916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.911 
20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.911 "name": "raid_bdev1", 00:13:23.911 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:23.911 "strip_size_kb": 0, 00:13:23.911 "state": "online", 00:13:23.911 "raid_level": "raid1", 00:13:23.911 "superblock": true, 00:13:23.911 "num_base_bdevs": 4, 00:13:23.911 "num_base_bdevs_discovered": 3, 00:13:23.911 "num_base_bdevs_operational": 3, 00:13:23.911 "base_bdevs_list": [ 00:13:23.911 { 00:13:23.911 "name": "spare", 00:13:23.911 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:23.911 "is_configured": true, 00:13:23.911 "data_offset": 2048, 00:13:23.911 "data_size": 63488 00:13:23.911 }, 00:13:23.911 { 00:13:23.911 "name": null, 00:13:23.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.911 "is_configured": false, 00:13:23.911 "data_offset": 2048, 00:13:23.911 "data_size": 63488 00:13:23.911 }, 00:13:23.911 { 00:13:23.911 "name": "BaseBdev3", 00:13:23.911 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:23.911 "is_configured": true, 00:13:23.911 "data_offset": 2048, 00:13:23.911 "data_size": 63488 00:13:23.911 }, 00:13:23.911 { 00:13:23.911 "name": "BaseBdev4", 00:13:23.911 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:23.911 "is_configured": true, 00:13:23.911 "data_offset": 2048, 00:13:23.911 "data_size": 63488 00:13:23.911 } 00:13:23.911 ] 00:13:23.911 }' 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.911 20:08:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.479 20:08:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.479 "name": "raid_bdev1", 00:13:24.479 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:24.479 "strip_size_kb": 0, 00:13:24.479 "state": "online", 00:13:24.479 "raid_level": "raid1", 00:13:24.479 "superblock": true, 00:13:24.479 "num_base_bdevs": 4, 00:13:24.479 "num_base_bdevs_discovered": 3, 00:13:24.479 "num_base_bdevs_operational": 3, 00:13:24.479 "base_bdevs_list": [ 00:13:24.479 { 00:13:24.479 "name": "spare", 00:13:24.479 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:24.479 "is_configured": true, 00:13:24.479 "data_offset": 2048, 00:13:24.479 "data_size": 63488 00:13:24.479 }, 00:13:24.479 { 00:13:24.479 "name": null, 00:13:24.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.479 "is_configured": false, 00:13:24.479 "data_offset": 2048, 00:13:24.479 "data_size": 63488 00:13:24.479 }, 00:13:24.479 { 00:13:24.479 "name": "BaseBdev3", 00:13:24.479 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:24.479 "is_configured": true, 00:13:24.479 "data_offset": 2048, 00:13:24.479 "data_size": 63488 00:13:24.479 }, 00:13:24.479 { 00:13:24.479 "name": "BaseBdev4", 00:13:24.479 "uuid": 
"fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:24.479 "is_configured": true, 00:13:24.479 "data_offset": 2048, 00:13:24.479 "data_size": 63488 00:13:24.479 } 00:13:24.479 ] 00:13:24.479 }' 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.479 [2024-12-08 20:08:56.388102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.479 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:24.479 20:08:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.480 "name": "raid_bdev1", 00:13:24.480 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:24.480 "strip_size_kb": 0, 00:13:24.480 "state": "online", 00:13:24.480 "raid_level": "raid1", 00:13:24.480 "superblock": true, 00:13:24.480 "num_base_bdevs": 4, 00:13:24.480 "num_base_bdevs_discovered": 2, 00:13:24.480 "num_base_bdevs_operational": 2, 00:13:24.480 "base_bdevs_list": [ 00:13:24.480 { 
00:13:24.480 "name": null, 00:13:24.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.480 "is_configured": false, 00:13:24.480 "data_offset": 0, 00:13:24.480 "data_size": 63488 00:13:24.480 }, 00:13:24.480 { 00:13:24.480 "name": null, 00:13:24.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.480 "is_configured": false, 00:13:24.480 "data_offset": 2048, 00:13:24.480 "data_size": 63488 00:13:24.480 }, 00:13:24.480 { 00:13:24.480 "name": "BaseBdev3", 00:13:24.480 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:24.480 "is_configured": true, 00:13:24.480 "data_offset": 2048, 00:13:24.480 "data_size": 63488 00:13:24.480 }, 00:13:24.480 { 00:13:24.480 "name": "BaseBdev4", 00:13:24.480 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:24.480 "is_configured": true, 00:13:24.480 "data_offset": 2048, 00:13:24.480 "data_size": 63488 00:13:24.480 } 00:13:24.480 ] 00:13:24.480 }' 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.480 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.046 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.046 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.046 [2024-12-08 20:08:56.879288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.046 [2024-12-08 20:08:56.879574] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:25.046 [2024-12-08 20:08:56.879637] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:25.046 [2024-12-08 20:08:56.879717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.046 [2024-12-08 20:08:56.893791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:25.046 20:08:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.046 20:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:25.046 [2024-12-08 20:08:56.895685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.000 "name": "raid_bdev1", 00:13:26.000 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:26.000 "strip_size_kb": 0, 00:13:26.000 "state": "online", 00:13:26.000 "raid_level": "raid1", 
00:13:26.000 "superblock": true, 00:13:26.000 "num_base_bdevs": 4, 00:13:26.000 "num_base_bdevs_discovered": 3, 00:13:26.000 "num_base_bdevs_operational": 3, 00:13:26.000 "process": { 00:13:26.000 "type": "rebuild", 00:13:26.000 "target": "spare", 00:13:26.000 "progress": { 00:13:26.000 "blocks": 20480, 00:13:26.000 "percent": 32 00:13:26.000 } 00:13:26.000 }, 00:13:26.000 "base_bdevs_list": [ 00:13:26.000 { 00:13:26.000 "name": "spare", 00:13:26.000 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:26.000 "is_configured": true, 00:13:26.000 "data_offset": 2048, 00:13:26.000 "data_size": 63488 00:13:26.000 }, 00:13:26.000 { 00:13:26.000 "name": null, 00:13:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.000 "is_configured": false, 00:13:26.000 "data_offset": 2048, 00:13:26.000 "data_size": 63488 00:13:26.000 }, 00:13:26.000 { 00:13:26.000 "name": "BaseBdev3", 00:13:26.000 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:26.000 "is_configured": true, 00:13:26.000 "data_offset": 2048, 00:13:26.000 "data_size": 63488 00:13:26.000 }, 00:13:26.000 { 00:13:26.000 "name": "BaseBdev4", 00:13:26.000 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:26.000 "is_configured": true, 00:13:26.000 "data_offset": 2048, 00:13:26.000 "data_size": 63488 00:13:26.000 } 00:13:26.000 ] 00:13:26.000 }' 00:13:26.000 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.258 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.258 20:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.258 [2024-12-08 20:08:58.027148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.258 [2024-12-08 20:08:58.100494] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.258 [2024-12-08 20:08:58.100598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.258 [2024-12-08 20:08:58.100638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.258 [2024-12-08 20:08:58.100660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.258 "name": "raid_bdev1", 00:13:26.258 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:26.258 "strip_size_kb": 0, 00:13:26.258 "state": "online", 00:13:26.258 "raid_level": "raid1", 00:13:26.258 "superblock": true, 00:13:26.258 "num_base_bdevs": 4, 00:13:26.258 "num_base_bdevs_discovered": 2, 00:13:26.258 "num_base_bdevs_operational": 2, 00:13:26.258 "base_bdevs_list": [ 00:13:26.258 { 00:13:26.258 "name": null, 00:13:26.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.258 "is_configured": false, 00:13:26.258 "data_offset": 0, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": null, 00:13:26.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.258 "is_configured": false, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": "BaseBdev3", 00:13:26.258 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:26.258 "is_configured": true, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 }, 00:13:26.258 { 00:13:26.258 "name": "BaseBdev4", 00:13:26.258 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:26.258 "is_configured": true, 00:13:26.258 "data_offset": 2048, 00:13:26.258 "data_size": 63488 00:13:26.258 } 00:13:26.258 ] 00:13:26.258 }' 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:26.258 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.824 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.824 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 [2024-12-08 20:08:58.597468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.824 [2024-12-08 20:08:58.597523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.824 [2024-12-08 20:08:58.597553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:26.824 [2024-12-08 20:08:58.597563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.824 [2024-12-08 20:08:58.598155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.824 [2024-12-08 20:08:58.598185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.824 [2024-12-08 20:08:58.598283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:26.824 [2024-12-08 20:08:58.598295] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:26.824 [2024-12-08 20:08:58.598309] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:26.824 [2024-12-08 20:08:58.598328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.824 [2024-12-08 20:08:58.612962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:26.824 spare 00:13:26.824 20:08:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.824 20:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:26.824 [2024-12-08 20:08:58.614814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.793 "name": "raid_bdev1", 00:13:27.793 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:27.793 "strip_size_kb": 0, 00:13:27.793 "state": "online", 00:13:27.793 
"raid_level": "raid1", 00:13:27.793 "superblock": true, 00:13:27.793 "num_base_bdevs": 4, 00:13:27.793 "num_base_bdevs_discovered": 3, 00:13:27.793 "num_base_bdevs_operational": 3, 00:13:27.793 "process": { 00:13:27.793 "type": "rebuild", 00:13:27.793 "target": "spare", 00:13:27.793 "progress": { 00:13:27.793 "blocks": 20480, 00:13:27.793 "percent": 32 00:13:27.793 } 00:13:27.793 }, 00:13:27.793 "base_bdevs_list": [ 00:13:27.793 { 00:13:27.793 "name": "spare", 00:13:27.793 "uuid": "c56741ed-2969-5139-b1ae-5d921c8a6366", 00:13:27.793 "is_configured": true, 00:13:27.793 "data_offset": 2048, 00:13:27.793 "data_size": 63488 00:13:27.793 }, 00:13:27.793 { 00:13:27.793 "name": null, 00:13:27.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.793 "is_configured": false, 00:13:27.793 "data_offset": 2048, 00:13:27.793 "data_size": 63488 00:13:27.793 }, 00:13:27.793 { 00:13:27.793 "name": "BaseBdev3", 00:13:27.793 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:27.793 "is_configured": true, 00:13:27.793 "data_offset": 2048, 00:13:27.793 "data_size": 63488 00:13:27.793 }, 00:13:27.793 { 00:13:27.793 "name": "BaseBdev4", 00:13:27.793 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:27.793 "is_configured": true, 00:13:27.793 "data_offset": 2048, 00:13:27.793 "data_size": 63488 00:13:27.793 } 00:13:27.793 ] 00:13:27.793 }' 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.793 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.053 [2024-12-08 20:08:59.774163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.053 [2024-12-08 20:08:59.819677] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.053 [2024-12-08 20:08:59.819737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.053 [2024-12-08 20:08:59.819753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.053 [2024-12-08 20:08:59.819762] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.053 
20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.053 "name": "raid_bdev1", 00:13:28.053 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:28.053 "strip_size_kb": 0, 00:13:28.053 "state": "online", 00:13:28.053 "raid_level": "raid1", 00:13:28.053 "superblock": true, 00:13:28.053 "num_base_bdevs": 4, 00:13:28.053 "num_base_bdevs_discovered": 2, 00:13:28.053 "num_base_bdevs_operational": 2, 00:13:28.053 "base_bdevs_list": [ 00:13:28.053 { 00:13:28.053 "name": null, 00:13:28.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.053 "is_configured": false, 00:13:28.053 "data_offset": 0, 00:13:28.053 "data_size": 63488 00:13:28.053 }, 00:13:28.053 { 00:13:28.053 "name": null, 00:13:28.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.053 "is_configured": false, 00:13:28.053 "data_offset": 2048, 00:13:28.053 "data_size": 63488 00:13:28.053 }, 00:13:28.053 { 00:13:28.053 "name": "BaseBdev3", 00:13:28.053 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:28.053 "is_configured": true, 00:13:28.053 "data_offset": 2048, 00:13:28.053 "data_size": 63488 00:13:28.053 }, 00:13:28.053 { 00:13:28.053 "name": "BaseBdev4", 00:13:28.053 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:28.053 "is_configured": true, 00:13:28.053 "data_offset": 2048, 00:13:28.053 "data_size": 63488 00:13:28.053 } 00:13:28.053 ] 00:13:28.053 }' 00:13:28.053 20:08:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.053 20:08:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.618 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.618 "name": "raid_bdev1", 00:13:28.618 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:28.618 "strip_size_kb": 0, 00:13:28.618 "state": "online", 00:13:28.618 "raid_level": "raid1", 00:13:28.618 "superblock": true, 00:13:28.618 "num_base_bdevs": 4, 00:13:28.618 "num_base_bdevs_discovered": 2, 00:13:28.618 "num_base_bdevs_operational": 2, 00:13:28.618 "base_bdevs_list": [ 00:13:28.618 { 00:13:28.618 "name": null, 00:13:28.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.618 "is_configured": false, 00:13:28.618 "data_offset": 0, 00:13:28.618 "data_size": 63488 00:13:28.618 }, 00:13:28.618 
{ 00:13:28.618 "name": null, 00:13:28.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.619 "is_configured": false, 00:13:28.619 "data_offset": 2048, 00:13:28.619 "data_size": 63488 00:13:28.619 }, 00:13:28.619 { 00:13:28.619 "name": "BaseBdev3", 00:13:28.619 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:28.619 "is_configured": true, 00:13:28.619 "data_offset": 2048, 00:13:28.619 "data_size": 63488 00:13:28.619 }, 00:13:28.619 { 00:13:28.619 "name": "BaseBdev4", 00:13:28.619 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:28.619 "is_configured": true, 00:13:28.619 "data_offset": 2048, 00:13:28.619 "data_size": 63488 00:13:28.619 } 00:13:28.619 ] 00:13:28.619 }' 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.619 [2024-12-08 20:09:00.468785] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:28.619 [2024-12-08 20:09:00.468891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.619 [2024-12-08 20:09:00.468917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:28.619 [2024-12-08 20:09:00.468928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.619 [2024-12-08 20:09:00.469428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.619 [2024-12-08 20:09:00.469451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.619 [2024-12-08 20:09:00.469531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:28.619 [2024-12-08 20:09:00.469548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:28.619 [2024-12-08 20:09:00.469558] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:28.619 [2024-12-08 20:09:00.469583] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:28.619 BaseBdev1 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.619 20:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.553 20:09:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.553 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.814 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.814 "name": "raid_bdev1", 00:13:29.814 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:29.814 "strip_size_kb": 0, 00:13:29.814 "state": "online", 00:13:29.814 "raid_level": "raid1", 00:13:29.814 "superblock": true, 00:13:29.814 "num_base_bdevs": 4, 00:13:29.814 "num_base_bdevs_discovered": 2, 00:13:29.814 "num_base_bdevs_operational": 2, 00:13:29.814 "base_bdevs_list": [ 00:13:29.814 { 00:13:29.814 "name": null, 00:13:29.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.814 "is_configured": false, 00:13:29.814 "data_offset": 0, 00:13:29.814 "data_size": 63488 00:13:29.814 }, 00:13:29.814 { 00:13:29.814 "name": null, 00:13:29.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.814 
"is_configured": false, 00:13:29.814 "data_offset": 2048, 00:13:29.814 "data_size": 63488 00:13:29.814 }, 00:13:29.814 { 00:13:29.814 "name": "BaseBdev3", 00:13:29.814 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:29.814 "is_configured": true, 00:13:29.814 "data_offset": 2048, 00:13:29.814 "data_size": 63488 00:13:29.814 }, 00:13:29.814 { 00:13:29.814 "name": "BaseBdev4", 00:13:29.814 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:29.814 "is_configured": true, 00:13:29.814 "data_offset": 2048, 00:13:29.814 "data_size": 63488 00:13:29.814 } 00:13:29.814 ] 00:13:29.814 }' 00:13:29.814 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.814 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:30.074 "name": "raid_bdev1", 00:13:30.074 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:30.074 "strip_size_kb": 0, 00:13:30.074 "state": "online", 00:13:30.074 "raid_level": "raid1", 00:13:30.074 "superblock": true, 00:13:30.074 "num_base_bdevs": 4, 00:13:30.074 "num_base_bdevs_discovered": 2, 00:13:30.074 "num_base_bdevs_operational": 2, 00:13:30.074 "base_bdevs_list": [ 00:13:30.074 { 00:13:30.074 "name": null, 00:13:30.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.074 "is_configured": false, 00:13:30.074 "data_offset": 0, 00:13:30.074 "data_size": 63488 00:13:30.074 }, 00:13:30.074 { 00:13:30.074 "name": null, 00:13:30.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.074 "is_configured": false, 00:13:30.074 "data_offset": 2048, 00:13:30.074 "data_size": 63488 00:13:30.074 }, 00:13:30.074 { 00:13:30.074 "name": "BaseBdev3", 00:13:30.074 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:30.074 "is_configured": true, 00:13:30.074 "data_offset": 2048, 00:13:30.074 "data_size": 63488 00:13:30.074 }, 00:13:30.074 { 00:13:30.074 "name": "BaseBdev4", 00:13:30.074 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:30.074 "is_configured": true, 00:13:30.074 "data_offset": 2048, 00:13:30.074 "data_size": 63488 00:13:30.074 } 00:13:30.074 ] 00:13:30.074 }' 00:13:30.074 20:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.074 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.074 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.334 [2024-12-08 20:09:02.094429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.334 [2024-12-08 20:09:02.094632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:30.334 [2024-12-08 20:09:02.094647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.334 request: 00:13:30.334 { 00:13:30.334 "base_bdev": "BaseBdev1", 00:13:30.334 "raid_bdev": "raid_bdev1", 00:13:30.334 "method": "bdev_raid_add_base_bdev", 00:13:30.334 "req_id": 1 00:13:30.334 } 00:13:30.334 Got JSON-RPC error response 00:13:30.334 response: 00:13:30.334 { 00:13:30.334 "code": -22, 00:13:30.334 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:30.334 } 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.334 20:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.274 "name": "raid_bdev1", 00:13:31.274 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:31.274 "strip_size_kb": 0, 00:13:31.274 "state": "online", 00:13:31.274 "raid_level": "raid1", 00:13:31.274 "superblock": true, 00:13:31.274 "num_base_bdevs": 4, 00:13:31.274 "num_base_bdevs_discovered": 2, 00:13:31.274 "num_base_bdevs_operational": 2, 00:13:31.274 "base_bdevs_list": [ 00:13:31.274 { 00:13:31.274 "name": null, 00:13:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.274 "is_configured": false, 00:13:31.274 "data_offset": 0, 00:13:31.274 "data_size": 63488 00:13:31.274 }, 00:13:31.274 { 00:13:31.274 "name": null, 00:13:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.274 "is_configured": false, 00:13:31.274 "data_offset": 2048, 00:13:31.274 "data_size": 63488 00:13:31.274 }, 00:13:31.274 { 00:13:31.274 "name": "BaseBdev3", 00:13:31.274 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:31.274 "is_configured": true, 00:13:31.274 "data_offset": 2048, 00:13:31.274 "data_size": 63488 00:13:31.274 }, 00:13:31.274 { 00:13:31.274 "name": "BaseBdev4", 00:13:31.274 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:31.274 "is_configured": true, 00:13:31.274 "data_offset": 2048, 00:13:31.274 "data_size": 63488 00:13:31.274 } 00:13:31.274 ] 00:13:31.274 }' 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.274 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.534 20:09:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.534 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.794 "name": "raid_bdev1", 00:13:31.794 "uuid": "207c0dfd-2f2a-4e06-a231-26c343310558", 00:13:31.794 "strip_size_kb": 0, 00:13:31.794 "state": "online", 00:13:31.794 "raid_level": "raid1", 00:13:31.794 "superblock": true, 00:13:31.794 "num_base_bdevs": 4, 00:13:31.794 "num_base_bdevs_discovered": 2, 00:13:31.794 "num_base_bdevs_operational": 2, 00:13:31.794 "base_bdevs_list": [ 00:13:31.794 { 00:13:31.794 "name": null, 00:13:31.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.794 "is_configured": false, 00:13:31.794 "data_offset": 0, 00:13:31.794 "data_size": 63488 00:13:31.794 }, 00:13:31.794 { 00:13:31.794 "name": null, 00:13:31.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.794 "is_configured": false, 00:13:31.794 "data_offset": 2048, 00:13:31.794 "data_size": 63488 00:13:31.794 }, 00:13:31.794 { 00:13:31.794 "name": "BaseBdev3", 00:13:31.794 "uuid": "ab52e580-2aff-5e0e-b90d-de975ed308b0", 00:13:31.794 "is_configured": true, 00:13:31.794 "data_offset": 2048, 00:13:31.794 "data_size": 63488 00:13:31.794 }, 
00:13:31.794 { 00:13:31.794 "name": "BaseBdev4", 00:13:31.794 "uuid": "fc7f04ce-f3db-5f39-b213-f077bbcd8f14", 00:13:31.794 "is_configured": true, 00:13:31.794 "data_offset": 2048, 00:13:31.794 "data_size": 63488 00:13:31.794 } 00:13:31.794 ] 00:13:31.794 }' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77710 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77710 ']' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77710 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77710 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77710' 00:13:31.794 killing process with pid 77710 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77710 00:13:31.794 Received shutdown signal, test time was about 60.000000 seconds 00:13:31.794 00:13:31.794 Latency(us) 00:13:31.794 
[2024-12-08T20:09:03.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.794 [2024-12-08T20:09:03.772Z] =================================================================================================================== 00:13:31.794 [2024-12-08T20:09:03.772Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.794 [2024-12-08 20:09:03.659935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.794 [2024-12-08 20:09:03.660101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.794 20:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77710 00:13:31.794 [2024-12-08 20:09:03.660208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.794 [2024-12-08 20:09:03.660220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:32.364 [2024-12-08 20:09:04.138455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.308 20:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.308 00:13:33.308 real 0m24.753s 00:13:33.308 user 0m30.183s 00:13:33.308 sys 0m3.538s 00:13:33.308 ************************************ 00:13:33.308 END TEST raid_rebuild_test_sb 00:13:33.308 ************************************ 00:13:33.308 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.308 20:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.570 20:09:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:33.570 20:09:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.570 20:09:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.570 20:09:05 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:33.570 ************************************ 00:13:33.570 START TEST raid_rebuild_test_io 00:13:33.570 ************************************ 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78464 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78464 00:13:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78464 ']' 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.570 20:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.570 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.570 Zero copy mechanism will not be used. 00:13:33.570 [2024-12-08 20:09:05.402548] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:13:33.570 [2024-12-08 20:09:05.402660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78464 ] 00:13:33.829 [2024-12-08 20:09:05.576882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.829 [2024-12-08 20:09:05.687798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.089 [2024-12-08 20:09:05.885035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.089 [2024-12-08 20:09:05.885090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.348 BaseBdev1_malloc 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.348 [2024-12-08 20:09:06.272068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:34.348 [2024-12-08 20:09:06.272127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.348 [2024-12-08 20:09:06.272150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.348 [2024-12-08 20:09:06.272160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.348 [2024-12-08 20:09:06.274210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.348 [2024-12-08 20:09:06.274251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.348 BaseBdev1 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.348 BaseBdev2_malloc 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 [2024-12-08 20:09:06.325846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:34.608 [2024-12-08 20:09:06.325905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.608 [2024-12-08 20:09:06.325927] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.608 [2024-12-08 20:09:06.325937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.608 [2024-12-08 20:09:06.328104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.608 [2024-12-08 20:09:06.328153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.608 BaseBdev2 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 BaseBdev3_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 [2024-12-08 20:09:06.393547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:34.608 [2024-12-08 20:09:06.393599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.608 [2024-12-08 20:09:06.393637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.608 [2024-12-08 20:09:06.393648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:34.608 [2024-12-08 20:09:06.395686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.608 [2024-12-08 20:09:06.395725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:34.608 BaseBdev3 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 BaseBdev4_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 [2024-12-08 20:09:06.449039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:34.608 [2024-12-08 20:09:06.449096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.608 [2024-12-08 20:09:06.449115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.608 [2024-12-08 20:09:06.449126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.608 [2024-12-08 20:09:06.451180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.608 [2024-12-08 20:09:06.451216] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:34.608 BaseBdev4 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 spare_malloc 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.608 spare_delay 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.608 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.609 [2024-12-08 20:09:06.515262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.609 [2024-12-08 20:09:06.515377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.609 [2024-12-08 20:09:06.515400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:34.609 [2024-12-08 20:09:06.515411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:34.609 [2024-12-08 20:09:06.517463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.609 [2024-12-08 20:09:06.517514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.609 spare 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.609 [2024-12-08 20:09:06.527287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.609 [2024-12-08 20:09:06.529083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.609 [2024-12-08 20:09:06.529232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.609 [2024-12-08 20:09:06.529304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:34.609 [2024-12-08 20:09:06.529393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.609 [2024-12-08 20:09:06.529419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:34.609 [2024-12-08 20:09:06.529667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:34.609 [2024-12-08 20:09:06.529849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.609 [2024-12-08 20:09:06.529862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.609 [2024-12-08 20:09:06.530017] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.609 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.869 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.869 "name": "raid_bdev1", 00:13:34.869 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:34.869 
"strip_size_kb": 0, 00:13:34.869 "state": "online", 00:13:34.869 "raid_level": "raid1", 00:13:34.869 "superblock": false, 00:13:34.869 "num_base_bdevs": 4, 00:13:34.869 "num_base_bdevs_discovered": 4, 00:13:34.869 "num_base_bdevs_operational": 4, 00:13:34.869 "base_bdevs_list": [ 00:13:34.869 { 00:13:34.869 "name": "BaseBdev1", 00:13:34.869 "uuid": "fe46deca-f63e-5a97-b558-e28ff71f9a90", 00:13:34.869 "is_configured": true, 00:13:34.869 "data_offset": 0, 00:13:34.869 "data_size": 65536 00:13:34.869 }, 00:13:34.869 { 00:13:34.869 "name": "BaseBdev2", 00:13:34.869 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:34.869 "is_configured": true, 00:13:34.869 "data_offset": 0, 00:13:34.869 "data_size": 65536 00:13:34.869 }, 00:13:34.869 { 00:13:34.869 "name": "BaseBdev3", 00:13:34.869 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:34.869 "is_configured": true, 00:13:34.869 "data_offset": 0, 00:13:34.869 "data_size": 65536 00:13:34.869 }, 00:13:34.869 { 00:13:34.869 "name": "BaseBdev4", 00:13:34.869 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:34.869 "is_configured": true, 00:13:34.869 "data_offset": 0, 00:13:34.869 "data_size": 65536 00:13:34.869 } 00:13:34.869 ] 00:13:34.869 }' 00:13:34.869 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.869 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.128 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.129 20:09:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.129 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.129 20:09:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-12-08 20:09:06.994811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.129 20:09:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 [2024-12-08 20:09:07.054353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.129 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.388 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.388 "name": "raid_bdev1", 00:13:35.388 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:35.388 "strip_size_kb": 0, 00:13:35.388 "state": "online", 00:13:35.388 "raid_level": "raid1", 00:13:35.388 "superblock": false, 00:13:35.388 "num_base_bdevs": 4, 00:13:35.388 "num_base_bdevs_discovered": 3, 00:13:35.388 "num_base_bdevs_operational": 3, 00:13:35.388 "base_bdevs_list": [ 00:13:35.388 { 00:13:35.388 "name": null, 00:13:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.389 "is_configured": false, 00:13:35.389 "data_offset": 0, 00:13:35.389 "data_size": 65536 00:13:35.389 
}, 00:13:35.389 { 00:13:35.389 "name": "BaseBdev2", 00:13:35.389 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:35.389 "is_configured": true, 00:13:35.389 "data_offset": 0, 00:13:35.389 "data_size": 65536 00:13:35.389 }, 00:13:35.389 { 00:13:35.389 "name": "BaseBdev3", 00:13:35.389 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:35.389 "is_configured": true, 00:13:35.389 "data_offset": 0, 00:13:35.389 "data_size": 65536 00:13:35.389 }, 00:13:35.389 { 00:13:35.389 "name": "BaseBdev4", 00:13:35.389 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:35.389 "is_configured": true, 00:13:35.389 "data_offset": 0, 00:13:35.389 "data_size": 65536 00:13:35.389 } 00:13:35.389 ] 00:13:35.389 }' 00:13:35.389 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.389 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.389 [2024-12-08 20:09:07.149043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.389 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:35.389 Zero copy mechanism will not be used. 00:13:35.389 Running I/O for 60 seconds... 
00:13:35.649 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.649 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.649 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.649 [2024-12-08 20:09:07.540136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.649 20:09:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.649 20:09:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.649 [2024-12-08 20:09:07.602584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:35.649 [2024-12-08 20:09:07.604604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.908 [2024-12-08 20:09:07.734536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.908 [2024-12-08 20:09:07.735275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.168 [2024-12-08 20:09:07.945715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.168 [2024-12-08 20:09:07.946562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.427 140.00 IOPS, 420.00 MiB/s [2024-12-08T20:09:08.405Z] [2024-12-08 20:09:08.289756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.427 [2024-12-08 20:09:08.291194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.687 [2024-12-08 20:09:08.532387] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.687 "name": "raid_bdev1", 00:13:36.687 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:36.687 "strip_size_kb": 0, 00:13:36.687 "state": "online", 00:13:36.687 "raid_level": "raid1", 00:13:36.687 "superblock": false, 00:13:36.687 "num_base_bdevs": 4, 00:13:36.687 "num_base_bdevs_discovered": 4, 00:13:36.687 "num_base_bdevs_operational": 4, 00:13:36.687 "process": { 00:13:36.687 "type": "rebuild", 00:13:36.687 "target": "spare", 00:13:36.687 "progress": { 00:13:36.687 "blocks": 10240, 00:13:36.687 "percent": 15 00:13:36.687 } 00:13:36.687 }, 00:13:36.687 "base_bdevs_list": [ 00:13:36.687 { 00:13:36.687 "name": "spare", 00:13:36.687 "uuid": 
"8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:36.687 "is_configured": true, 00:13:36.687 "data_offset": 0, 00:13:36.687 "data_size": 65536 00:13:36.687 }, 00:13:36.687 { 00:13:36.687 "name": "BaseBdev2", 00:13:36.687 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:36.687 "is_configured": true, 00:13:36.687 "data_offset": 0, 00:13:36.687 "data_size": 65536 00:13:36.687 }, 00:13:36.687 { 00:13:36.687 "name": "BaseBdev3", 00:13:36.687 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:36.687 "is_configured": true, 00:13:36.687 "data_offset": 0, 00:13:36.687 "data_size": 65536 00:13:36.687 }, 00:13:36.687 { 00:13:36.687 "name": "BaseBdev4", 00:13:36.687 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:36.687 "is_configured": true, 00:13:36.687 "data_offset": 0, 00:13:36.687 "data_size": 65536 00:13:36.687 } 00:13:36.687 ] 00:13:36.687 }' 00:13:36.687 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.950 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.950 [2024-12-08 20:09:08.748947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.950 [2024-12-08 20:09:08.867512] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.950 [2024-12-08 20:09:08.872505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:36.950 [2024-12-08 20:09:08.872555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.950 [2024-12-08 20:09:08.872567] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.950 [2024-12-08 20:09:08.907915] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.220 20:09:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.220 "name": "raid_bdev1", 00:13:37.220 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:37.220 "strip_size_kb": 0, 00:13:37.220 "state": "online", 00:13:37.220 "raid_level": "raid1", 00:13:37.220 "superblock": false, 00:13:37.220 "num_base_bdevs": 4, 00:13:37.220 "num_base_bdevs_discovered": 3, 00:13:37.220 "num_base_bdevs_operational": 3, 00:13:37.220 "base_bdevs_list": [ 00:13:37.220 { 00:13:37.220 "name": null, 00:13:37.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.220 "is_configured": false, 00:13:37.220 "data_offset": 0, 00:13:37.220 "data_size": 65536 00:13:37.220 }, 00:13:37.220 { 00:13:37.220 "name": "BaseBdev2", 00:13:37.220 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:37.220 "is_configured": true, 00:13:37.220 "data_offset": 0, 00:13:37.220 "data_size": 65536 00:13:37.220 }, 00:13:37.220 { 00:13:37.220 "name": "BaseBdev3", 00:13:37.220 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:37.220 "is_configured": true, 00:13:37.220 "data_offset": 0, 00:13:37.220 "data_size": 65536 00:13:37.220 }, 00:13:37.220 { 00:13:37.220 "name": "BaseBdev4", 00:13:37.220 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:37.220 "is_configured": true, 00:13:37.220 "data_offset": 0, 00:13:37.220 "data_size": 65536 00:13:37.220 } 00:13:37.220 ] 00:13:37.220 }' 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.220 20:09:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.497 134.50 IOPS, 403.50 MiB/s [2024-12-08T20:09:09.475Z] 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.497 20:09:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.497 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.497 "name": "raid_bdev1", 00:13:37.497 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:37.497 "strip_size_kb": 0, 00:13:37.497 "state": "online", 00:13:37.497 "raid_level": "raid1", 00:13:37.497 "superblock": false, 00:13:37.497 "num_base_bdevs": 4, 00:13:37.497 "num_base_bdevs_discovered": 3, 00:13:37.497 "num_base_bdevs_operational": 3, 00:13:37.497 "base_bdevs_list": [ 00:13:37.497 { 00:13:37.497 "name": null, 00:13:37.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.498 "is_configured": false, 00:13:37.498 "data_offset": 0, 00:13:37.498 "data_size": 65536 00:13:37.498 }, 00:13:37.498 { 00:13:37.498 "name": "BaseBdev2", 00:13:37.498 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:37.498 "is_configured": true, 00:13:37.498 "data_offset": 0, 00:13:37.498 "data_size": 65536 00:13:37.498 }, 00:13:37.498 { 00:13:37.498 "name": "BaseBdev3", 00:13:37.498 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 
00:13:37.498 "is_configured": true, 00:13:37.498 "data_offset": 0, 00:13:37.498 "data_size": 65536 00:13:37.498 }, 00:13:37.498 { 00:13:37.498 "name": "BaseBdev4", 00:13:37.498 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:37.498 "is_configured": true, 00:13:37.498 "data_offset": 0, 00:13:37.498 "data_size": 65536 00:13:37.498 } 00:13:37.498 ] 00:13:37.498 }' 00:13:37.498 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.758 [2024-12-08 20:09:09.552178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.758 20:09:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.758 [2024-12-08 20:09:09.604118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:37.758 [2024-12-08 20:09:09.606022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.758 [2024-12-08 20:09:09.727048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:37.758 [2024-12-08 20:09:09.728548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.018 [2024-12-08 20:09:09.939378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.018 [2024-12-08 20:09:09.939729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.538 153.00 IOPS, 459.00 MiB/s [2024-12-08T20:09:10.516Z] [2024-12-08 20:09:10.282159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:38.538 [2024-12-08 20:09:10.282776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:38.538 [2024-12-08 20:09:10.397636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:38.538 [2024-12-08 20:09:10.398367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.799 
20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.799 "name": "raid_bdev1", 00:13:38.799 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:38.799 "strip_size_kb": 0, 00:13:38.799 "state": "online", 00:13:38.799 "raid_level": "raid1", 00:13:38.799 "superblock": false, 00:13:38.799 "num_base_bdevs": 4, 00:13:38.799 "num_base_bdevs_discovered": 4, 00:13:38.799 "num_base_bdevs_operational": 4, 00:13:38.799 "process": { 00:13:38.799 "type": "rebuild", 00:13:38.799 "target": "spare", 00:13:38.799 "progress": { 00:13:38.799 "blocks": 12288, 00:13:38.799 "percent": 18 00:13:38.799 } 00:13:38.799 }, 00:13:38.799 "base_bdevs_list": [ 00:13:38.799 { 00:13:38.799 "name": "spare", 00:13:38.799 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 65536 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": "BaseBdev2", 00:13:38.799 "uuid": "d9b58ff5-693b-5f84-9e14-c4bfe4299540", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 65536 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": "BaseBdev3", 00:13:38.799 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 65536 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": "BaseBdev4", 00:13:38.799 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 65536 00:13:38.799 } 00:13:38.799 ] 00:13:38.799 }' 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.799 [2024-12-08 20:09:10.723608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.799 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.799 [2024-12-08 20:09:10.755748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.060 [2024-12-08 20:09:10.840941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.060 [2024-12-08 20:09:10.863978] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:39.060 [2024-12-08 20:09:10.864046] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:39.060 [2024-12-08 20:09:10.870970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.060 [2024-12-08 20:09:10.876944] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.060 "name": "raid_bdev1", 00:13:39.060 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:39.060 "strip_size_kb": 0, 00:13:39.060 "state": "online", 00:13:39.060 "raid_level": "raid1", 00:13:39.060 "superblock": false, 00:13:39.060 "num_base_bdevs": 4, 00:13:39.060 "num_base_bdevs_discovered": 3, 00:13:39.060 "num_base_bdevs_operational": 3, 
00:13:39.060 "process": { 00:13:39.060 "type": "rebuild", 00:13:39.060 "target": "spare", 00:13:39.060 "progress": { 00:13:39.060 "blocks": 16384, 00:13:39.060 "percent": 25 00:13:39.060 } 00:13:39.060 }, 00:13:39.060 "base_bdevs_list": [ 00:13:39.060 { 00:13:39.060 "name": "spare", 00:13:39.060 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:39.060 "is_configured": true, 00:13:39.060 "data_offset": 0, 00:13:39.060 "data_size": 65536 00:13:39.060 }, 00:13:39.060 { 00:13:39.060 "name": null, 00:13:39.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.060 "is_configured": false, 00:13:39.060 "data_offset": 0, 00:13:39.060 "data_size": 65536 00:13:39.060 }, 00:13:39.060 { 00:13:39.060 "name": "BaseBdev3", 00:13:39.060 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:39.060 "is_configured": true, 00:13:39.060 "data_offset": 0, 00:13:39.060 "data_size": 65536 00:13:39.060 }, 00:13:39.060 { 00:13:39.060 "name": "BaseBdev4", 00:13:39.060 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:39.060 "is_configured": true, 00:13:39.060 "data_offset": 0, 00:13:39.060 "data_size": 65536 00:13:39.060 } 00:13:39.060 ] 00:13:39.060 }' 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=472 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.060 20:09:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.060 20:09:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.060 20:09:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.060 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.060 "name": "raid_bdev1", 00:13:39.060 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:39.060 "strip_size_kb": 0, 00:13:39.060 "state": "online", 00:13:39.061 "raid_level": "raid1", 00:13:39.061 "superblock": false, 00:13:39.061 "num_base_bdevs": 4, 00:13:39.061 "num_base_bdevs_discovered": 3, 00:13:39.061 "num_base_bdevs_operational": 3, 00:13:39.061 "process": { 00:13:39.061 "type": "rebuild", 00:13:39.061 "target": "spare", 00:13:39.061 "progress": { 00:13:39.061 "blocks": 16384, 00:13:39.061 "percent": 25 00:13:39.061 } 00:13:39.061 }, 00:13:39.061 "base_bdevs_list": [ 00:13:39.061 { 00:13:39.061 "name": "spare", 00:13:39.061 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:39.061 "is_configured": true, 00:13:39.061 "data_offset": 0, 00:13:39.061 "data_size": 65536 00:13:39.061 }, 00:13:39.061 { 00:13:39.061 "name": null, 00:13:39.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.061 
"is_configured": false, 00:13:39.061 "data_offset": 0, 00:13:39.061 "data_size": 65536 00:13:39.061 }, 00:13:39.061 { 00:13:39.061 "name": "BaseBdev3", 00:13:39.061 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:39.061 "is_configured": true, 00:13:39.061 "data_offset": 0, 00:13:39.061 "data_size": 65536 00:13:39.061 }, 00:13:39.061 { 00:13:39.061 "name": "BaseBdev4", 00:13:39.061 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:39.061 "is_configured": true, 00:13:39.061 "data_offset": 0, 00:13:39.061 "data_size": 65536 00:13:39.061 } 00:13:39.061 ] 00:13:39.061 }' 00:13:39.061 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.321 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.321 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.321 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.321 20:09:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.260 130.25 IOPS, 390.75 MiB/s [2024-12-08T20:09:12.238Z] 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.260 20:09:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.260 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.260 "name": "raid_bdev1", 00:13:40.260 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:40.260 "strip_size_kb": 0, 00:13:40.260 "state": "online", 00:13:40.260 "raid_level": "raid1", 00:13:40.260 "superblock": false, 00:13:40.260 "num_base_bdevs": 4, 00:13:40.260 "num_base_bdevs_discovered": 3, 00:13:40.260 "num_base_bdevs_operational": 3, 00:13:40.260 "process": { 00:13:40.260 "type": "rebuild", 00:13:40.260 "target": "spare", 00:13:40.260 "progress": { 00:13:40.260 "blocks": 36864, 00:13:40.260 "percent": 56 00:13:40.260 } 00:13:40.260 }, 00:13:40.260 "base_bdevs_list": [ 00:13:40.260 { 00:13:40.260 "name": "spare", 00:13:40.261 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:40.261 "is_configured": true, 00:13:40.261 "data_offset": 0, 00:13:40.261 "data_size": 65536 00:13:40.261 }, 00:13:40.261 { 00:13:40.261 "name": null, 00:13:40.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.261 "is_configured": false, 00:13:40.261 "data_offset": 0, 00:13:40.261 "data_size": 65536 00:13:40.261 }, 00:13:40.261 { 00:13:40.261 "name": "BaseBdev3", 00:13:40.261 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:40.261 "is_configured": true, 00:13:40.261 "data_offset": 0, 00:13:40.261 "data_size": 65536 00:13:40.261 }, 00:13:40.261 { 00:13:40.261 "name": "BaseBdev4", 00:13:40.261 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:40.261 "is_configured": true, 00:13:40.261 "data_offset": 0, 00:13:40.261 "data_size": 65536 00:13:40.261 } 
00:13:40.261 ] 00:13:40.261 }' 00:13:40.261 118.80 IOPS, 356.40 MiB/s [2024-12-08T20:09:12.239Z] 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.261 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.261 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.261 [2024-12-08 20:09:12.219040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:40.261 [2024-12-08 20:09:12.220025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:40.261 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.261 20:09:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.520 [2024-12-08 20:09:12.434605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:41.088 [2024-12-08 20:09:12.853754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:41.346 106.67 IOPS, 320.00 MiB/s [2024-12-08T20:09:13.324Z] 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.346 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.346 "name": "raid_bdev1", 00:13:41.346 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:41.346 "strip_size_kb": 0, 00:13:41.346 "state": "online", 00:13:41.346 "raid_level": "raid1", 00:13:41.346 "superblock": false, 00:13:41.346 "num_base_bdevs": 4, 00:13:41.346 "num_base_bdevs_discovered": 3, 00:13:41.346 "num_base_bdevs_operational": 3, 00:13:41.346 "process": { 00:13:41.346 "type": "rebuild", 00:13:41.346 "target": "spare", 00:13:41.346 "progress": { 00:13:41.346 "blocks": 51200, 00:13:41.346 "percent": 78 00:13:41.346 } 00:13:41.346 }, 00:13:41.346 "base_bdevs_list": [ 00:13:41.346 { 00:13:41.346 "name": "spare", 00:13:41.346 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:41.346 "is_configured": true, 00:13:41.346 "data_offset": 0, 00:13:41.346 "data_size": 65536 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": null, 00:13:41.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.346 "is_configured": false, 00:13:41.346 "data_offset": 0, 00:13:41.346 "data_size": 65536 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": "BaseBdev3", 00:13:41.346 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:41.346 "is_configured": true, 00:13:41.346 "data_offset": 0, 00:13:41.346 "data_size": 65536 00:13:41.346 }, 00:13:41.346 { 00:13:41.346 "name": "BaseBdev4", 00:13:41.346 "uuid": 
"396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:41.347 "is_configured": true, 00:13:41.347 "data_offset": 0, 00:13:41.347 "data_size": 65536 00:13:41.347 } 00:13:41.347 ] 00:13:41.347 }' 00:13:41.347 [2024-12-08 20:09:13.291400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:41.347 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.605 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.605 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.605 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.605 20:09:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.173 [2024-12-08 20:09:13.997310] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.173 [2024-12-08 20:09:14.051443] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.173 [2024-12-08 20:09:14.053852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.432 96.57 IOPS, 289.71 MiB/s [2024-12-08T20:09:14.410Z] 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.432 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.691 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.691 "name": "raid_bdev1", 00:13:42.691 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:42.691 "strip_size_kb": 0, 00:13:42.691 "state": "online", 00:13:42.691 "raid_level": "raid1", 00:13:42.691 "superblock": false, 00:13:42.691 "num_base_bdevs": 4, 00:13:42.691 "num_base_bdevs_discovered": 3, 00:13:42.691 "num_base_bdevs_operational": 3, 00:13:42.691 "base_bdevs_list": [ 00:13:42.691 { 00:13:42.691 "name": "spare", 00:13:42.691 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:42.691 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": null, 00:13:42.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.692 "is_configured": false, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev3", 00:13:42.692 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev4", 00:13:42.692 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 } 00:13:42.692 ] 00:13:42.692 }' 00:13:42.692 20:09:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.692 "name": "raid_bdev1", 00:13:42.692 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:42.692 "strip_size_kb": 0, 00:13:42.692 "state": "online", 00:13:42.692 "raid_level": "raid1", 00:13:42.692 "superblock": false, 00:13:42.692 "num_base_bdevs": 4, 
00:13:42.692 "num_base_bdevs_discovered": 3, 00:13:42.692 "num_base_bdevs_operational": 3, 00:13:42.692 "base_bdevs_list": [ 00:13:42.692 { 00:13:42.692 "name": "spare", 00:13:42.692 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": null, 00:13:42.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.692 "is_configured": false, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev3", 00:13:42.692 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev4", 00:13:42.692 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 } 00:13:42.692 ] 00:13:42.692 }' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.692 20:09:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.692 "name": "raid_bdev1", 00:13:42.692 "uuid": "6231a15f-6279-4486-9a71-f98630d96b93", 00:13:42.692 "strip_size_kb": 0, 00:13:42.692 "state": "online", 00:13:42.692 "raid_level": "raid1", 00:13:42.692 "superblock": false, 00:13:42.692 "num_base_bdevs": 4, 00:13:42.692 "num_base_bdevs_discovered": 3, 00:13:42.692 "num_base_bdevs_operational": 3, 00:13:42.692 "base_bdevs_list": [ 00:13:42.692 { 00:13:42.692 "name": "spare", 00:13:42.692 "uuid": "8adc4b58-d8ca-5a77-84aa-46671ab19e2d", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": null, 00:13:42.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.692 
"is_configured": false, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev3", 00:13:42.692 "uuid": "ee0f39dd-497a-5376-b3ab-4b57501bbb1b", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 }, 00:13:42.692 { 00:13:42.692 "name": "BaseBdev4", 00:13:42.692 "uuid": "396a5eb1-6037-590f-bf35-e3d44d467332", 00:13:42.692 "is_configured": true, 00:13:42.692 "data_offset": 0, 00:13:42.692 "data_size": 65536 00:13:42.692 } 00:13:42.692 ] 00:13:42.692 }' 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.692 20:09:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.263 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.263 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.263 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.263 [2024-12-08 20:09:15.011852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.263 [2024-12-08 20:09:15.011885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.263 00:13:43.263 Latency(us) 00:13:43.263 [2024-12-08T20:09:15.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.263 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:43.263 raid_bdev1 : 7.90 89.58 268.73 0.00 0.00 15960.71 302.28 117220.72 00:13:43.263 [2024-12-08T20:09:15.241Z] =================================================================================================================== 00:13:43.263 [2024-12-08T20:09:15.241Z] Total : 89.58 268.73 0.00 0.00 15960.71 302.28 117220.72 00:13:43.263 [2024-12-08 20:09:15.060058] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.263 [2024-12-08 20:09:15.060158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.263 [2024-12-08 20:09:15.060288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.263 [2024-12-08 20:09:15.060357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.263 { 00:13:43.263 "results": [ 00:13:43.263 { 00:13:43.263 "job": "raid_bdev1", 00:13:43.263 "core_mask": "0x1", 00:13:43.263 "workload": "randrw", 00:13:43.263 "percentage": 50, 00:13:43.263 "status": "finished", 00:13:43.263 "queue_depth": 2, 00:13:43.263 "io_size": 3145728, 00:13:43.263 "runtime": 7.903886, 00:13:43.263 "iops": 89.57619074971475, 00:13:43.263 "mibps": 268.72857224914424, 00:13:43.263 "io_failed": 0, 00:13:43.263 "io_timeout": 0, 00:13:43.263 "avg_latency_us": 15960.705992647967, 00:13:43.263 "min_latency_us": 302.2812227074236, 00:13:43.263 "max_latency_us": 117220.7231441048 00:13:43.263 } 00:13:43.263 ], 00:13:43.263 "core_count": 1 00:13:43.263 } 00:13:43.263 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.263 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.264 
20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.264 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:43.522 /dev/nbd0 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.522 20:09:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.522 1+0 records in 00:13:43.522 1+0 records out 00:13:43.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581244 s, 7.0 MB/s 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- 
# nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.522 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:43.781 /dev/nbd1 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 
-- # (( i = 1 )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.781 1+0 records in 00:13:43.781 1+0 records out 00:13:43.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288773 s, 14.2 MB/s 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.781 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.040 20:09:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.040 20:09:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:44.299 /dev/nbd1 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.299 1+0 records in 00:13:44.299 1+0 records out 00:13:44.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318955 s, 12.8 MB/s 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.299 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.558 
20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.558 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.559 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78464 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78464 ']' 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78464 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78464 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.818 killing process with pid 78464 00:13:44.818 Received shutdown signal, test time was about 9.641299 seconds 00:13:44.818 00:13:44.818 Latency(us) 00:13:44.818 [2024-12-08T20:09:16.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.818 [2024-12-08T20:09:16.796Z] =================================================================================================================== 00:13:44.818 [2024-12-08T20:09:16.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78464' 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78464 00:13:44.818 [2024-12-08 20:09:16.773793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.818 20:09:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78464 00:13:45.388 [2024-12-08 20:09:17.172629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.328 20:09:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:46.328 00:13:46.328 real 0m12.996s 00:13:46.328 user 0m16.348s 00:13:46.328 sys 0m1.737s 00:13:46.328 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.328 20:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 ************************************ 00:13:46.328 END TEST raid_rebuild_test_io 00:13:46.328 ************************************ 00:13:46.588 20:09:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:46.588 20:09:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:46.588 20:09:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.588 20:09:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.588 ************************************ 00:13:46.588 START TEST raid_rebuild_test_sb_io 00:13:46.588 ************************************ 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78868 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78868 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78868 ']' 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.588 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.589 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.589 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.589 20:09:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.589 [2024-12-08 20:09:18.468434] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:13:46.589 [2024-12-08 20:09:18.468656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78868 ] 00:13:46.589 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.589 Zero copy mechanism will not be used. 00:13:46.849 [2024-12-08 20:09:18.639664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.849 [2024-12-08 20:09:18.750773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.108 [2024-12-08 20:09:18.945785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.108 [2024-12-08 20:09:18.945907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.367 BaseBdev1_malloc 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.367 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.368 20:09:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.368 [2024-12-08 20:09:19.332825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.368 [2024-12-08 20:09:19.332885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.368 [2024-12-08 20:09:19.332910] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.368 [2024-12-08 20:09:19.332921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.368 [2024-12-08 20:09:19.334938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.368 [2024-12-08 20:09:19.334991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.368 BaseBdev1 00:13:47.368 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.368 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.368 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.368 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.368 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.627 BaseBdev2_malloc 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.627 [2024-12-08 20:09:19.386170] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:13:47.627 [2024-12-08 20:09:19.386226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.627 [2024-12-08 20:09:19.386249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.627 [2024-12-08 20:09:19.386261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.627 [2024-12-08 20:09:19.388302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.627 [2024-12-08 20:09:19.388352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.627 BaseBdev2 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.627 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 BaseBdev3_malloc 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 [2024-12-08 20:09:19.460784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.628 [2024-12-08 20:09:19.460832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.628 
[2024-12-08 20:09:19.460856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:47.628 [2024-12-08 20:09:19.460867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.628 [2024-12-08 20:09:19.462837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.628 [2024-12-08 20:09:19.462874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.628 BaseBdev3 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 BaseBdev4_malloc 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 [2024-12-08 20:09:19.513833] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:47.628 [2024-12-08 20:09:19.513888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.628 [2024-12-08 20:09:19.513908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.628 [2024-12-08 20:09:19.513918] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.628 [2024-12-08 20:09:19.515882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.628 [2024-12-08 20:09:19.515921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.628 BaseBdev4 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 spare_malloc 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 spare_delay 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 [2024-12-08 20:09:19.577764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.628 [2024-12-08 20:09:19.577811] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:47.628 [2024-12-08 20:09:19.577827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.628 [2024-12-08 20:09:19.577837] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.628 [2024-12-08 20:09:19.579817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.628 [2024-12-08 20:09:19.579906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.628 spare 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.628 [2024-12-08 20:09:19.589795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.628 [2024-12-08 20:09:19.591549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.628 [2024-12-08 20:09:19.591611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.628 [2024-12-08 20:09:19.591659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.628 [2024-12-08 20:09:19.591827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:47.628 [2024-12-08 20:09:19.591842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.628 [2024-12-08 20:09:19.592079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.628 [2024-12-08 20:09:19.592248] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:47.628 [2024-12-08 20:09:19.592264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:47.628 [2024-12-08 20:09:19.592418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.628 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.628 20:09:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.888 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.888 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.888 "name": "raid_bdev1", 00:13:47.888 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:47.888 "strip_size_kb": 0, 00:13:47.888 "state": "online", 00:13:47.888 "raid_level": "raid1", 00:13:47.888 "superblock": true, 00:13:47.888 "num_base_bdevs": 4, 00:13:47.888 "num_base_bdevs_discovered": 4, 00:13:47.888 "num_base_bdevs_operational": 4, 00:13:47.888 "base_bdevs_list": [ 00:13:47.888 { 00:13:47.888 "name": "BaseBdev1", 00:13:47.888 "uuid": "aa4d5178-83d0-54ff-93ed-d3c9d9cbab87", 00:13:47.888 "is_configured": true, 00:13:47.888 "data_offset": 2048, 00:13:47.888 "data_size": 63488 00:13:47.888 }, 00:13:47.888 { 00:13:47.888 "name": "BaseBdev2", 00:13:47.888 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:47.888 "is_configured": true, 00:13:47.888 "data_offset": 2048, 00:13:47.888 "data_size": 63488 00:13:47.888 }, 00:13:47.888 { 00:13:47.888 "name": "BaseBdev3", 00:13:47.888 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:47.888 "is_configured": true, 00:13:47.888 "data_offset": 2048, 00:13:47.888 "data_size": 63488 00:13:47.888 }, 00:13:47.888 { 00:13:47.888 "name": "BaseBdev4", 00:13:47.888 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:47.888 "is_configured": true, 00:13:47.888 "data_offset": 2048, 00:13:47.888 "data_size": 63488 00:13:47.888 } 00:13:47.888 ] 00:13:47.888 }' 00:13:47.888 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.888 20:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.148 [2024-12-08 20:09:20.041366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.148 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.407 [2024-12-08 20:09:20.124844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.408 "name": "raid_bdev1", 00:13:48.408 "uuid": 
"87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:48.408 "strip_size_kb": 0, 00:13:48.408 "state": "online", 00:13:48.408 "raid_level": "raid1", 00:13:48.408 "superblock": true, 00:13:48.408 "num_base_bdevs": 4, 00:13:48.408 "num_base_bdevs_discovered": 3, 00:13:48.408 "num_base_bdevs_operational": 3, 00:13:48.408 "base_bdevs_list": [ 00:13:48.408 { 00:13:48.408 "name": null, 00:13:48.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.408 "is_configured": false, 00:13:48.408 "data_offset": 0, 00:13:48.408 "data_size": 63488 00:13:48.408 }, 00:13:48.408 { 00:13:48.408 "name": "BaseBdev2", 00:13:48.408 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:48.408 "is_configured": true, 00:13:48.408 "data_offset": 2048, 00:13:48.408 "data_size": 63488 00:13:48.408 }, 00:13:48.408 { 00:13:48.408 "name": "BaseBdev3", 00:13:48.408 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:48.408 "is_configured": true, 00:13:48.408 "data_offset": 2048, 00:13:48.408 "data_size": 63488 00:13:48.408 }, 00:13:48.408 { 00:13:48.408 "name": "BaseBdev4", 00:13:48.408 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:48.408 "is_configured": true, 00:13:48.408 "data_offset": 2048, 00:13:48.408 "data_size": 63488 00:13:48.408 } 00:13:48.408 ] 00:13:48.408 }' 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.408 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.408 [2024-12-08 20:09:20.211625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:48.408 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:48.408 Zero copy mechanism will not be used. 00:13:48.408 Running I/O for 60 seconds... 
00:13:48.667 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.668 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.668 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.668 [2024-12-08 20:09:20.537574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.668 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.668 20:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:48.668 [2024-12-08 20:09:20.609605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:48.668 [2024-12-08 20:09:20.611607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.928 [2024-12-08 20:09:20.732745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.188 [2024-12-08 20:09:20.956390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.188 [2024-12-08 20:09:20.957206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.447 168.00 IOPS, 504.00 MiB/s [2024-12-08T20:09:21.425Z] [2024-12-08 20:09:21.307011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.705 
20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.705 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.705 "name": "raid_bdev1", 00:13:49.705 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:49.705 "strip_size_kb": 0, 00:13:49.705 "state": "online", 00:13:49.705 "raid_level": "raid1", 00:13:49.705 "superblock": true, 00:13:49.705 "num_base_bdevs": 4, 00:13:49.705 "num_base_bdevs_discovered": 4, 00:13:49.705 "num_base_bdevs_operational": 4, 00:13:49.705 "process": { 00:13:49.705 "type": "rebuild", 00:13:49.705 "target": "spare", 00:13:49.705 "progress": { 00:13:49.705 "blocks": 12288, 00:13:49.706 "percent": 19 00:13:49.706 } 00:13:49.706 }, 00:13:49.706 "base_bdevs_list": [ 00:13:49.706 { 00:13:49.706 "name": "spare", 00:13:49.706 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:49.706 "is_configured": true, 00:13:49.706 "data_offset": 2048, 00:13:49.706 "data_size": 63488 00:13:49.706 }, 00:13:49.706 { 00:13:49.706 "name": "BaseBdev2", 00:13:49.706 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:49.706 "is_configured": true, 00:13:49.706 "data_offset": 2048, 00:13:49.706 "data_size": 63488 00:13:49.706 }, 00:13:49.706 { 00:13:49.706 "name": "BaseBdev3", 00:13:49.706 "uuid": 
"8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:49.706 "is_configured": true, 00:13:49.706 "data_offset": 2048, 00:13:49.706 "data_size": 63488 00:13:49.706 }, 00:13:49.706 { 00:13:49.706 "name": "BaseBdev4", 00:13:49.706 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:49.706 "is_configured": true, 00:13:49.706 "data_offset": 2048, 00:13:49.706 "data_size": 63488 00:13:49.706 } 00:13:49.706 ] 00:13:49.706 }' 00:13:49.706 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.965 [2024-12-08 20:09:21.705091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:49.965 [2024-12-08 20:09:21.706454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.965 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.965 [2024-12-08 20:09:21.744863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.965 [2024-12-08 20:09:21.917631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.965 [2024-12-08 20:09:21.921021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.965 [2024-12-08 20:09:21.921058] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.965 [2024-12-08 20:09:21.921071] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.224 [2024-12-08 20:09:21.948794] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.224 20:09:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.224 20:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.224 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.224 "name": "raid_bdev1", 00:13:50.224 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:50.224 "strip_size_kb": 0, 00:13:50.224 "state": "online", 00:13:50.224 "raid_level": "raid1", 00:13:50.224 "superblock": true, 00:13:50.224 "num_base_bdevs": 4, 00:13:50.224 "num_base_bdevs_discovered": 3, 00:13:50.224 "num_base_bdevs_operational": 3, 00:13:50.224 "base_bdevs_list": [ 00:13:50.224 { 00:13:50.224 "name": null, 00:13:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.224 "is_configured": false, 00:13:50.224 "data_offset": 0, 00:13:50.224 "data_size": 63488 00:13:50.224 }, 00:13:50.224 { 00:13:50.224 "name": "BaseBdev2", 00:13:50.224 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:50.224 "is_configured": true, 00:13:50.224 "data_offset": 2048, 00:13:50.224 "data_size": 63488 00:13:50.224 }, 00:13:50.224 { 00:13:50.224 "name": "BaseBdev3", 00:13:50.224 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:50.224 "is_configured": true, 00:13:50.224 "data_offset": 2048, 00:13:50.224 "data_size": 63488 00:13:50.224 }, 00:13:50.224 { 00:13:50.224 "name": "BaseBdev4", 00:13:50.224 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:50.224 "is_configured": true, 00:13:50.224 "data_offset": 2048, 00:13:50.224 "data_size": 63488 00:13:50.224 } 00:13:50.224 ] 00:13:50.224 }' 00:13:50.224 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.224 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 142.50 IOPS, 427.50 MiB/s [2024-12-08T20:09:22.722Z] 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.744 "name": "raid_bdev1", 00:13:50.744 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:50.744 "strip_size_kb": 0, 00:13:50.744 "state": "online", 00:13:50.744 "raid_level": "raid1", 00:13:50.744 "superblock": true, 00:13:50.744 "num_base_bdevs": 4, 00:13:50.744 "num_base_bdevs_discovered": 3, 00:13:50.744 "num_base_bdevs_operational": 3, 00:13:50.744 "base_bdevs_list": [ 00:13:50.744 { 00:13:50.744 "name": null, 00:13:50.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.744 "is_configured": false, 00:13:50.744 "data_offset": 0, 00:13:50.744 "data_size": 63488 00:13:50.744 }, 00:13:50.744 { 00:13:50.744 "name": "BaseBdev2", 00:13:50.744 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:50.744 "is_configured": true, 00:13:50.744 "data_offset": 2048, 00:13:50.744 "data_size": 63488 00:13:50.744 }, 00:13:50.744 { 00:13:50.744 "name": "BaseBdev3", 
00:13:50.744 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:50.744 "is_configured": true, 00:13:50.744 "data_offset": 2048, 00:13:50.744 "data_size": 63488 00:13:50.744 }, 00:13:50.744 { 00:13:50.744 "name": "BaseBdev4", 00:13:50.744 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:50.744 "is_configured": true, 00:13:50.744 "data_offset": 2048, 00:13:50.744 "data_size": 63488 00:13:50.744 } 00:13:50.744 ] 00:13:50.744 }' 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 [2024-12-08 20:09:22.609009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.744 20:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:50.744 [2024-12-08 20:09:22.655149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:50.744 [2024-12-08 20:09:22.657067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.004 [2024-12-08 20:09:22.772772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.005 
[2024-12-08 20:09:22.773173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.005 [2024-12-08 20:09:22.895433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.005 [2024-12-08 20:09:22.895639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.264 [2024-12-08 20:09:23.151379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:51.524 150.67 IOPS, 452.00 MiB/s [2024-12-08T20:09:23.502Z] [2024-12-08 20:09:23.383572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:51.524 [2024-12-08 20:09:23.384450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.785 "name": "raid_bdev1", 00:13:51.785 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:51.785 "strip_size_kb": 0, 00:13:51.785 "state": "online", 00:13:51.785 "raid_level": "raid1", 00:13:51.785 "superblock": true, 00:13:51.785 "num_base_bdevs": 4, 00:13:51.785 "num_base_bdevs_discovered": 4, 00:13:51.785 "num_base_bdevs_operational": 4, 00:13:51.785 "process": { 00:13:51.785 "type": "rebuild", 00:13:51.785 "target": "spare", 00:13:51.785 "progress": { 00:13:51.785 "blocks": 12288, 00:13:51.785 "percent": 19 00:13:51.785 } 00:13:51.785 }, 00:13:51.785 "base_bdevs_list": [ 00:13:51.785 { 00:13:51.785 "name": "spare", 00:13:51.785 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:51.785 "is_configured": true, 00:13:51.785 "data_offset": 2048, 00:13:51.785 "data_size": 63488 00:13:51.785 }, 00:13:51.785 { 00:13:51.785 "name": "BaseBdev2", 00:13:51.785 "uuid": "623b99b2-6c39-5bb5-b4b3-d316c5629091", 00:13:51.785 "is_configured": true, 00:13:51.785 "data_offset": 2048, 00:13:51.785 "data_size": 63488 00:13:51.785 }, 00:13:51.785 { 00:13:51.785 "name": "BaseBdev3", 00:13:51.785 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:51.785 "is_configured": true, 00:13:51.785 "data_offset": 2048, 00:13:51.785 "data_size": 63488 00:13:51.785 }, 00:13:51.785 { 00:13:51.785 "name": "BaseBdev4", 00:13:51.785 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:51.785 "is_configured": true, 00:13:51.785 "data_offset": 2048, 00:13:51.785 "data_size": 63488 00:13:51.785 } 00:13:51.785 ] 00:13:51.785 }' 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:13:51.785 [2024-12-08 20:09:23.720876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:51.785 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:52.046 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.046 [2024-12-08 20:09:23.781575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.046 [2024-12-08 20:09:23.829932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:52.046 [2024-12-08 20:09:23.830257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:52.046 [2024-12-08 20:09:23.936571] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:52.046 [2024-12-08 
20:09:23.936635] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.046 "name": "raid_bdev1", 00:13:52.046 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:52.046 "strip_size_kb": 0, 00:13:52.046 "state": "online", 00:13:52.046 "raid_level": "raid1", 00:13:52.046 "superblock": true, 00:13:52.046 "num_base_bdevs": 4, 00:13:52.046 "num_base_bdevs_discovered": 3, 
00:13:52.046 "num_base_bdevs_operational": 3, 00:13:52.046 "process": { 00:13:52.046 "type": "rebuild", 00:13:52.046 "target": "spare", 00:13:52.046 "progress": { 00:13:52.046 "blocks": 16384, 00:13:52.046 "percent": 25 00:13:52.046 } 00:13:52.046 }, 00:13:52.046 "base_bdevs_list": [ 00:13:52.046 { 00:13:52.046 "name": "spare", 00:13:52.046 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:52.046 "is_configured": true, 00:13:52.046 "data_offset": 2048, 00:13:52.046 "data_size": 63488 00:13:52.046 }, 00:13:52.046 { 00:13:52.046 "name": null, 00:13:52.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.046 "is_configured": false, 00:13:52.046 "data_offset": 0, 00:13:52.046 "data_size": 63488 00:13:52.046 }, 00:13:52.046 { 00:13:52.046 "name": "BaseBdev3", 00:13:52.046 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:52.046 "is_configured": true, 00:13:52.046 "data_offset": 2048, 00:13:52.046 "data_size": 63488 00:13:52.046 }, 00:13:52.046 { 00:13:52.046 "name": "BaseBdev4", 00:13:52.046 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:52.046 "is_configured": true, 00:13:52.046 "data_offset": 2048, 00:13:52.046 "data_size": 63488 00:13:52.046 } 00:13:52.046 ] 00:13:52.046 }' 00:13:52.046 20:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.307 "name": "raid_bdev1", 00:13:52.307 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:52.307 "strip_size_kb": 0, 00:13:52.307 "state": "online", 00:13:52.307 "raid_level": "raid1", 00:13:52.307 "superblock": true, 00:13:52.307 "num_base_bdevs": 4, 00:13:52.307 "num_base_bdevs_discovered": 3, 00:13:52.307 "num_base_bdevs_operational": 3, 00:13:52.307 "process": { 00:13:52.307 "type": "rebuild", 00:13:52.307 "target": "spare", 00:13:52.307 "progress": { 00:13:52.307 "blocks": 18432, 00:13:52.307 "percent": 29 00:13:52.307 } 00:13:52.307 }, 00:13:52.307 "base_bdevs_list": [ 00:13:52.307 { 00:13:52.307 "name": "spare", 00:13:52.307 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:52.307 "is_configured": true, 00:13:52.307 "data_offset": 2048, 00:13:52.307 "data_size": 63488 00:13:52.307 }, 00:13:52.307 { 
00:13:52.307 "name": null, 00:13:52.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.307 "is_configured": false, 00:13:52.307 "data_offset": 0, 00:13:52.307 "data_size": 63488 00:13:52.307 }, 00:13:52.307 { 00:13:52.307 "name": "BaseBdev3", 00:13:52.307 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:52.307 "is_configured": true, 00:13:52.307 "data_offset": 2048, 00:13:52.307 "data_size": 63488 00:13:52.307 }, 00:13:52.307 { 00:13:52.307 "name": "BaseBdev4", 00:13:52.307 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:52.307 "is_configured": true, 00:13:52.307 "data_offset": 2048, 00:13:52.307 "data_size": 63488 00:13:52.307 } 00:13:52.307 ] 00:13:52.307 }' 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.307 [2024-12-08 20:09:24.169958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.307 20:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.567 127.00 IOPS, 381.00 MiB/s [2024-12-08T20:09:24.545Z] [2024-12-08 20:09:24.290944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:53.137 [2024-12-08 20:09:24.927007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:53.395 117.40 IOPS, 352.20 MiB/s [2024-12-08T20:09:25.373Z] 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.395 20:09:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.395 "name": "raid_bdev1", 00:13:53.395 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:53.395 "strip_size_kb": 0, 00:13:53.395 "state": "online", 00:13:53.395 "raid_level": "raid1", 00:13:53.395 "superblock": true, 00:13:53.395 "num_base_bdevs": 4, 00:13:53.395 "num_base_bdevs_discovered": 3, 00:13:53.395 "num_base_bdevs_operational": 3, 00:13:53.395 "process": { 00:13:53.395 "type": "rebuild", 00:13:53.395 "target": "spare", 00:13:53.395 "progress": { 00:13:53.395 "blocks": 36864, 00:13:53.395 "percent": 58 00:13:53.395 } 00:13:53.395 }, 00:13:53.395 "base_bdevs_list": [ 00:13:53.395 { 00:13:53.395 "name": "spare", 00:13:53.395 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:53.395 "is_configured": true, 00:13:53.395 "data_offset": 2048, 
00:13:53.395 "data_size": 63488 00:13:53.395 }, 00:13:53.395 { 00:13:53.395 "name": null, 00:13:53.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.395 "is_configured": false, 00:13:53.395 "data_offset": 0, 00:13:53.395 "data_size": 63488 00:13:53.395 }, 00:13:53.395 { 00:13:53.395 "name": "BaseBdev3", 00:13:53.395 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:53.395 "is_configured": true, 00:13:53.395 "data_offset": 2048, 00:13:53.395 "data_size": 63488 00:13:53.395 }, 00:13:53.395 { 00:13:53.395 "name": "BaseBdev4", 00:13:53.395 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:53.395 "is_configured": true, 00:13:53.395 "data_offset": 2048, 00:13:53.395 "data_size": 63488 00:13:53.395 } 00:13:53.395 ] 00:13:53.395 }' 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.395 20:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.594 105.83 IOPS, 317.50 MiB/s [2024-12-08T20:09:26.572Z] 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.594 20:09:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 [2024-12-08 20:09:26.368475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.594 "name": "raid_bdev1", 00:13:54.594 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:54.594 "strip_size_kb": 0, 00:13:54.594 "state": "online", 00:13:54.594 "raid_level": "raid1", 00:13:54.594 "superblock": true, 00:13:54.594 "num_base_bdevs": 4, 00:13:54.594 "num_base_bdevs_discovered": 3, 00:13:54.594 "num_base_bdevs_operational": 3, 00:13:54.594 "process": { 00:13:54.594 "type": "rebuild", 00:13:54.594 "target": "spare", 00:13:54.594 "progress": { 00:13:54.594 "blocks": 57344, 00:13:54.594 "percent": 90 00:13:54.594 } 00:13:54.594 }, 00:13:54.594 "base_bdevs_list": [ 00:13:54.594 { 00:13:54.594 "name": "spare", 00:13:54.594 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": null, 00:13:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.594 "is_configured": false, 00:13:54.594 "data_offset": 0, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": 
"BaseBdev3", 00:13:54.594 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev4", 00:13:54.594 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 } 00:13:54.594 ] 00:13:54.594 }' 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.594 20:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.854 [2024-12-08 20:09:26.698796] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:54.854 [2024-12-08 20:09:26.798557] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:54.854 [2024-12-08 20:09:26.801021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.684 95.43 IOPS, 286.29 MiB/s [2024-12-08T20:09:27.662Z] 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.684 "name": "raid_bdev1", 00:13:55.684 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:55.684 "strip_size_kb": 0, 00:13:55.684 "state": "online", 00:13:55.684 "raid_level": "raid1", 00:13:55.684 "superblock": true, 00:13:55.684 "num_base_bdevs": 4, 00:13:55.684 "num_base_bdevs_discovered": 3, 00:13:55.684 "num_base_bdevs_operational": 3, 00:13:55.684 "base_bdevs_list": [ 00:13:55.684 { 00:13:55.684 "name": "spare", 00:13:55.684 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:55.684 "is_configured": true, 00:13:55.684 "data_offset": 2048, 00:13:55.684 "data_size": 63488 00:13:55.684 }, 00:13:55.684 { 00:13:55.684 "name": null, 00:13:55.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.684 "is_configured": false, 00:13:55.684 "data_offset": 0, 00:13:55.684 "data_size": 63488 00:13:55.684 }, 00:13:55.684 { 00:13:55.684 "name": "BaseBdev3", 00:13:55.684 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:55.684 "is_configured": true, 00:13:55.684 "data_offset": 2048, 00:13:55.684 "data_size": 63488 00:13:55.684 }, 00:13:55.684 { 00:13:55.684 "name": "BaseBdev4", 00:13:55.684 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 
00:13:55.684 "is_configured": true, 00:13:55.684 "data_offset": 2048, 00:13:55.684 "data_size": 63488 00:13:55.684 } 00:13:55.684 ] 00:13:55.684 }' 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.684 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.944 "name": "raid_bdev1", 00:13:55.944 "uuid": 
"87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:55.944 "strip_size_kb": 0, 00:13:55.944 "state": "online", 00:13:55.944 "raid_level": "raid1", 00:13:55.944 "superblock": true, 00:13:55.944 "num_base_bdevs": 4, 00:13:55.944 "num_base_bdevs_discovered": 3, 00:13:55.944 "num_base_bdevs_operational": 3, 00:13:55.944 "base_bdevs_list": [ 00:13:55.944 { 00:13:55.944 "name": "spare", 00:13:55.944 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": null, 00:13:55.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.944 "is_configured": false, 00:13:55.944 "data_offset": 0, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": "BaseBdev3", 00:13:55.944 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": "BaseBdev4", 00:13:55.944 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 } 00:13:55.944 ] 00:13:55.944 }' 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.944 20:09:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.944 "name": "raid_bdev1", 00:13:55.944 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:55.944 "strip_size_kb": 0, 00:13:55.944 "state": "online", 00:13:55.944 "raid_level": "raid1", 00:13:55.944 "superblock": true, 00:13:55.944 "num_base_bdevs": 4, 00:13:55.944 "num_base_bdevs_discovered": 3, 00:13:55.944 "num_base_bdevs_operational": 3, 00:13:55.944 "base_bdevs_list": [ 00:13:55.944 { 00:13:55.944 "name": "spare", 00:13:55.944 "uuid": 
"4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": null, 00:13:55.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.944 "is_configured": false, 00:13:55.944 "data_offset": 0, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": "BaseBdev3", 00:13:55.944 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 }, 00:13:55.944 { 00:13:55.944 "name": "BaseBdev4", 00:13:55.944 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:55.944 "is_configured": true, 00:13:55.944 "data_offset": 2048, 00:13:55.944 "data_size": 63488 00:13:55.944 } 00:13:55.944 ] 00:13:55.944 }' 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.944 20:09:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.512 [2024-12-08 20:09:28.199388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.512 [2024-12-08 20:09:28.199421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.512 88.00 IOPS, 264.00 MiB/s 00:13:56.512 Latency(us) 00:13:56.512 [2024-12-08T20:09:28.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.512 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:56.512 raid_bdev1 : 8.10 87.15 261.45 0.00 0.00 16046.68 293.34 
117220.72 00:13:56.512 [2024-12-08T20:09:28.490Z] =================================================================================================================== 00:13:56.512 [2024-12-08T20:09:28.490Z] Total : 87.15 261.45 0.00 0.00 16046.68 293.34 117220.72 00:13:56.512 { 00:13:56.512 "results": [ 00:13:56.512 { 00:13:56.512 "job": "raid_bdev1", 00:13:56.512 "core_mask": "0x1", 00:13:56.512 "workload": "randrw", 00:13:56.512 "percentage": 50, 00:13:56.512 "status": "finished", 00:13:56.512 "queue_depth": 2, 00:13:56.512 "io_size": 3145728, 00:13:56.512 "runtime": 8.101122, 00:13:56.512 "iops": 87.1484221568321, 00:13:56.512 "mibps": 261.4452664704963, 00:13:56.512 "io_failed": 0, 00:13:56.512 "io_timeout": 0, 00:13:56.512 "avg_latency_us": 16046.684364832936, 00:13:56.512 "min_latency_us": 293.3379912663755, 00:13:56.512 "max_latency_us": 117220.7231441048 00:13:56.512 } 00:13:56.512 ], 00:13:56.512 "core_count": 1 00:13:56.512 } 00:13:56.512 [2024-12-08 20:09:28.320021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.512 [2024-12-08 20:09:28.320101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.512 [2024-12-08 20:09:28.320198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.512 [2024-12-08 20:09:28.320213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.512 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:56.771 /dev/nbd0 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.771 1+0 records in 00:13:56.771 1+0 records out 00:13:56.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224571 s, 18.2 MB/s 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:56.771 20:09:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.771 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:57.030 /dev/nbd1 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # 
local i 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.031 1+0 records in 00:13:57.031 1+0 records out 00:13:57.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002612 s, 15.7 MB/s 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.031 20:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:57.290 20:09:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.290 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:57.549 /dev/nbd1 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.549 1+0 records in 00:13:57.549 1+0 records out 00:13:57.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363432 s, 11.3 MB/s 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.549 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.809 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:58.067 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:13:58.068 20:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.068 [2024-12-08 20:09:30.020397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.068 [2024-12-08 20:09:30.020496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.068 [2024-12-08 20:09:30.020545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:58.068 
[2024-12-08 20:09:30.020589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.068 [2024-12-08 20:09:30.022799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.068 [2024-12-08 20:09:30.022874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:58.068 [2024-12-08 20:09:30.023024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:58.068 [2024-12-08 20:09:30.023125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.068 [2024-12-08 20:09:30.023306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.068 [2024-12-08 20:09:30.023469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.068 spare 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.068 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 [2024-12-08 20:09:30.123403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:58.327 [2024-12-08 20:09:30.123465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.327 [2024-12-08 20:09:30.123817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:58.327 [2024-12-08 20:09:30.124049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:58.327 [2024-12-08 20:09:30.124092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:58.327 [2024-12-08 20:09:30.124327] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.327 "name": 
"raid_bdev1", 00:13:58.327 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:58.327 "strip_size_kb": 0, 00:13:58.327 "state": "online", 00:13:58.327 "raid_level": "raid1", 00:13:58.327 "superblock": true, 00:13:58.327 "num_base_bdevs": 4, 00:13:58.327 "num_base_bdevs_discovered": 3, 00:13:58.327 "num_base_bdevs_operational": 3, 00:13:58.327 "base_bdevs_list": [ 00:13:58.327 { 00:13:58.327 "name": "spare", 00:13:58.327 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:58.327 "is_configured": true, 00:13:58.327 "data_offset": 2048, 00:13:58.327 "data_size": 63488 00:13:58.327 }, 00:13:58.327 { 00:13:58.327 "name": null, 00:13:58.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.327 "is_configured": false, 00:13:58.327 "data_offset": 2048, 00:13:58.327 "data_size": 63488 00:13:58.327 }, 00:13:58.327 { 00:13:58.327 "name": "BaseBdev3", 00:13:58.327 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:58.327 "is_configured": true, 00:13:58.327 "data_offset": 2048, 00:13:58.327 "data_size": 63488 00:13:58.327 }, 00:13:58.327 { 00:13:58.327 "name": "BaseBdev4", 00:13:58.327 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:58.327 "is_configured": true, 00:13:58.327 "data_offset": 2048, 00:13:58.327 "data_size": 63488 00:13:58.327 } 00:13:58.327 ] 00:13:58.327 }' 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.327 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.586 20:09:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.586 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.845 "name": "raid_bdev1", 00:13:58.845 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:58.845 "strip_size_kb": 0, 00:13:58.845 "state": "online", 00:13:58.845 "raid_level": "raid1", 00:13:58.845 "superblock": true, 00:13:58.845 "num_base_bdevs": 4, 00:13:58.845 "num_base_bdevs_discovered": 3, 00:13:58.845 "num_base_bdevs_operational": 3, 00:13:58.845 "base_bdevs_list": [ 00:13:58.845 { 00:13:58.845 "name": "spare", 00:13:58.845 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:13:58.845 "is_configured": true, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": null, 00:13:58.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.845 "is_configured": false, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": "BaseBdev3", 00:13:58.845 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:58.845 "is_configured": true, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": "BaseBdev4", 00:13:58.845 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:58.845 "is_configured": true, 00:13:58.845 "data_offset": 2048, 
00:13:58.845 "data_size": 63488 00:13:58.845 } 00:13:58.845 ] 00:13:58.845 }' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.845 [2024-12-08 20:09:30.723351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.845 "name": "raid_bdev1", 00:13:58.845 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:13:58.845 "strip_size_kb": 0, 00:13:58.845 "state": "online", 00:13:58.845 "raid_level": "raid1", 00:13:58.845 "superblock": true, 00:13:58.845 "num_base_bdevs": 4, 00:13:58.845 "num_base_bdevs_discovered": 2, 00:13:58.845 "num_base_bdevs_operational": 2, 00:13:58.845 "base_bdevs_list": [ 00:13:58.845 { 00:13:58.845 "name": 
null, 00:13:58.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.845 "is_configured": false, 00:13:58.845 "data_offset": 0, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": null, 00:13:58.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.845 "is_configured": false, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": "BaseBdev3", 00:13:58.845 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:13:58.845 "is_configured": true, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 }, 00:13:58.845 { 00:13:58.845 "name": "BaseBdev4", 00:13:58.845 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:13:58.845 "is_configured": true, 00:13:58.845 "data_offset": 2048, 00:13:58.845 "data_size": 63488 00:13:58.845 } 00:13:58.845 ] 00:13:58.845 }' 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.845 20:09:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.414 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.414 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.414 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.414 [2024-12-08 20:09:31.174668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.414 [2024-12-08 20:09:31.174899] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:59.414 [2024-12-08 20:09:31.174991] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:59.414 [2024-12-08 20:09:31.175061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.414 [2024-12-08 20:09:31.189360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:13:59.414 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.414 20:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:59.414 [2024-12-08 20:09:31.191182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.352 "name": "raid_bdev1", 00:14:00.352 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:00.352 "strip_size_kb": 0, 00:14:00.352 "state": "online", 
00:14:00.352 "raid_level": "raid1", 00:14:00.352 "superblock": true, 00:14:00.352 "num_base_bdevs": 4, 00:14:00.352 "num_base_bdevs_discovered": 3, 00:14:00.352 "num_base_bdevs_operational": 3, 00:14:00.352 "process": { 00:14:00.352 "type": "rebuild", 00:14:00.352 "target": "spare", 00:14:00.352 "progress": { 00:14:00.352 "blocks": 20480, 00:14:00.352 "percent": 32 00:14:00.352 } 00:14:00.352 }, 00:14:00.352 "base_bdevs_list": [ 00:14:00.352 { 00:14:00.352 "name": "spare", 00:14:00.352 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:14:00.352 "is_configured": true, 00:14:00.352 "data_offset": 2048, 00:14:00.352 "data_size": 63488 00:14:00.352 }, 00:14:00.352 { 00:14:00.352 "name": null, 00:14:00.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.352 "is_configured": false, 00:14:00.352 "data_offset": 2048, 00:14:00.352 "data_size": 63488 00:14:00.352 }, 00:14:00.352 { 00:14:00.352 "name": "BaseBdev3", 00:14:00.352 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:00.352 "is_configured": true, 00:14:00.352 "data_offset": 2048, 00:14:00.352 "data_size": 63488 00:14:00.352 }, 00:14:00.352 { 00:14:00.352 "name": "BaseBdev4", 00:14:00.352 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:00.352 "is_configured": true, 00:14:00.352 "data_offset": 2048, 00:14:00.352 "data_size": 63488 00:14:00.352 } 00:14:00.352 ] 00:14:00.352 }' 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.352 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.353 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.353 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.353 20:09:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.353 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.612 [2024-12-08 20:09:32.331093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.612 [2024-12-08 20:09:32.396007] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.612 [2024-12-08 20:09:32.396103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.612 [2024-12-08 20:09:32.396120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.612 [2024-12-08 20:09:32.396129] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.612 20:09:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.612 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.612 "name": "raid_bdev1", 00:14:00.612 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:00.612 "strip_size_kb": 0, 00:14:00.612 "state": "online", 00:14:00.612 "raid_level": "raid1", 00:14:00.612 "superblock": true, 00:14:00.612 "num_base_bdevs": 4, 00:14:00.613 "num_base_bdevs_discovered": 2, 00:14:00.613 "num_base_bdevs_operational": 2, 00:14:00.613 "base_bdevs_list": [ 00:14:00.613 { 00:14:00.613 "name": null, 00:14:00.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.613 "is_configured": false, 00:14:00.613 "data_offset": 0, 00:14:00.613 "data_size": 63488 00:14:00.613 }, 00:14:00.613 { 00:14:00.613 "name": null, 00:14:00.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.613 "is_configured": false, 00:14:00.613 "data_offset": 2048, 00:14:00.613 "data_size": 63488 00:14:00.613 }, 00:14:00.613 { 00:14:00.613 "name": "BaseBdev3", 00:14:00.613 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:00.613 "is_configured": true, 00:14:00.613 "data_offset": 2048, 00:14:00.613 "data_size": 63488 00:14:00.613 }, 00:14:00.613 { 00:14:00.613 "name": "BaseBdev4", 00:14:00.613 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:00.613 "is_configured": true, 00:14:00.613 "data_offset": 2048, 00:14:00.613 
"data_size": 63488 00:14:00.613 } 00:14:00.613 ] 00:14:00.613 }' 00:14:00.613 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.613 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.182 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.182 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.182 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.182 [2024-12-08 20:09:32.907877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.182 [2024-12-08 20:09:32.908005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.182 [2024-12-08 20:09:32.908071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:01.182 [2024-12-08 20:09:32.908118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.182 [2024-12-08 20:09:32.908634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.182 [2024-12-08 20:09:32.908722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.182 [2024-12-08 20:09:32.908841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.182 [2024-12-08 20:09:32.908883] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:01.182 [2024-12-08 20:09:32.908939] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:01.182 [2024-12-08 20:09:32.909017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.182 [2024-12-08 20:09:32.923480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:01.182 spare 00:14:01.182 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.182 [2024-12-08 20:09:32.925299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.183 20:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.120 "name": "raid_bdev1", 00:14:02.120 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:02.120 "strip_size_kb": 0, 00:14:02.120 
"state": "online", 00:14:02.120 "raid_level": "raid1", 00:14:02.120 "superblock": true, 00:14:02.120 "num_base_bdevs": 4, 00:14:02.120 "num_base_bdevs_discovered": 3, 00:14:02.120 "num_base_bdevs_operational": 3, 00:14:02.120 "process": { 00:14:02.120 "type": "rebuild", 00:14:02.120 "target": "spare", 00:14:02.120 "progress": { 00:14:02.120 "blocks": 20480, 00:14:02.120 "percent": 32 00:14:02.120 } 00:14:02.120 }, 00:14:02.120 "base_bdevs_list": [ 00:14:02.120 { 00:14:02.120 "name": "spare", 00:14:02.120 "uuid": "4dedc060-e192-5de4-9085-ba4de69307d6", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": null, 00:14:02.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.120 "is_configured": false, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": "BaseBdev3", 00:14:02.120 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 }, 00:14:02.120 { 00:14:02.120 "name": "BaseBdev4", 00:14:02.120 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:02.120 "is_configured": true, 00:14:02.120 "data_offset": 2048, 00:14:02.120 "data_size": 63488 00:14:02.120 } 00:14:02.120 ] 00:14:02.120 }' 00:14:02.120 20:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.120 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.120 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.120 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.120 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.120 20:09:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.120 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.120 [2024-12-08 20:09:34.085241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.380 [2024-12-08 20:09:34.130098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.380 [2024-12-08 20:09:34.130219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.380 [2024-12-08 20:09:34.130242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.380 [2024-12-08 20:09:34.130250] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.380 20:09:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.380 "name": "raid_bdev1", 00:14:02.380 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:02.380 "strip_size_kb": 0, 00:14:02.380 "state": "online", 00:14:02.380 "raid_level": "raid1", 00:14:02.380 "superblock": true, 00:14:02.380 "num_base_bdevs": 4, 00:14:02.380 "num_base_bdevs_discovered": 2, 00:14:02.380 "num_base_bdevs_operational": 2, 00:14:02.380 "base_bdevs_list": [ 00:14:02.380 { 00:14:02.380 "name": null, 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.380 "is_configured": false, 00:14:02.380 "data_offset": 0, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": null, 00:14:02.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.380 "is_configured": false, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": "BaseBdev3", 00:14:02.380 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 "data_size": 63488 00:14:02.380 }, 00:14:02.380 { 00:14:02.380 "name": "BaseBdev4", 00:14:02.380 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:02.380 "is_configured": true, 00:14:02.380 "data_offset": 2048, 00:14:02.380 
"data_size": 63488 00:14:02.380 } 00:14:02.380 ] 00:14:02.380 }' 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.380 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.639 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.639 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.639 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.639 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.639 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.898 "name": "raid_bdev1", 00:14:02.898 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:02.898 "strip_size_kb": 0, 00:14:02.898 "state": "online", 00:14:02.898 "raid_level": "raid1", 00:14:02.898 "superblock": true, 00:14:02.898 "num_base_bdevs": 4, 00:14:02.898 "num_base_bdevs_discovered": 2, 00:14:02.898 "num_base_bdevs_operational": 2, 00:14:02.898 "base_bdevs_list": [ 00:14:02.898 { 00:14:02.898 "name": null, 00:14:02.898 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:02.898 "is_configured": false, 00:14:02.898 "data_offset": 0, 00:14:02.898 "data_size": 63488 00:14:02.898 }, 00:14:02.898 { 00:14:02.898 "name": null, 00:14:02.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.898 "is_configured": false, 00:14:02.898 "data_offset": 2048, 00:14:02.898 "data_size": 63488 00:14:02.898 }, 00:14:02.898 { 00:14:02.898 "name": "BaseBdev3", 00:14:02.898 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:02.898 "is_configured": true, 00:14:02.898 "data_offset": 2048, 00:14:02.898 "data_size": 63488 00:14:02.898 }, 00:14:02.898 { 00:14:02.898 "name": "BaseBdev4", 00:14:02.898 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:02.898 "is_configured": true, 00:14:02.898 "data_offset": 2048, 00:14:02.898 "data_size": 63488 00:14:02.898 } 00:14:02.898 ] 00:14:02.898 }' 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.898 20:09:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.898 [2024-12-08 20:09:34.758327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.898 [2024-12-08 20:09:34.758415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.898 [2024-12-08 20:09:34.758446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:02.898 [2024-12-08 20:09:34.758455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.898 [2024-12-08 20:09:34.758898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.898 [2024-12-08 20:09:34.758916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.898 [2024-12-08 20:09:34.759011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:02.898 [2024-12-08 20:09:34.759025] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:02.898 [2024-12-08 20:09:34.759034] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.898 [2024-12-08 20:09:34.759048] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:02.898 BaseBdev1 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.898 20:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.832 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.091 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.091 "name": "raid_bdev1", 00:14:04.091 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:04.091 "strip_size_kb": 0, 00:14:04.091 "state": "online", 00:14:04.091 "raid_level": "raid1", 00:14:04.091 "superblock": true, 00:14:04.091 "num_base_bdevs": 4, 00:14:04.091 "num_base_bdevs_discovered": 2, 00:14:04.091 "num_base_bdevs_operational": 2, 00:14:04.091 "base_bdevs_list": [ 00:14:04.091 { 00:14:04.091 "name": null, 00:14:04.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.091 "is_configured": false, 00:14:04.091 
"data_offset": 0, 00:14:04.091 "data_size": 63488 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": null, 00:14:04.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.091 "is_configured": false, 00:14:04.091 "data_offset": 2048, 00:14:04.091 "data_size": 63488 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": "BaseBdev3", 00:14:04.091 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:04.091 "is_configured": true, 00:14:04.091 "data_offset": 2048, 00:14:04.091 "data_size": 63488 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": "BaseBdev4", 00:14:04.091 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:04.091 "is_configured": true, 00:14:04.091 "data_offset": 2048, 00:14:04.091 "data_size": 63488 00:14:04.091 } 00:14:04.091 ] 00:14:04.091 }' 00:14:04.091 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.091 20:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.351 "name": "raid_bdev1", 00:14:04.351 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:04.351 "strip_size_kb": 0, 00:14:04.351 "state": "online", 00:14:04.351 "raid_level": "raid1", 00:14:04.351 "superblock": true, 00:14:04.351 "num_base_bdevs": 4, 00:14:04.351 "num_base_bdevs_discovered": 2, 00:14:04.351 "num_base_bdevs_operational": 2, 00:14:04.351 "base_bdevs_list": [ 00:14:04.351 { 00:14:04.351 "name": null, 00:14:04.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.351 "is_configured": false, 00:14:04.351 "data_offset": 0, 00:14:04.351 "data_size": 63488 00:14:04.351 }, 00:14:04.351 { 00:14:04.351 "name": null, 00:14:04.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.351 "is_configured": false, 00:14:04.351 "data_offset": 2048, 00:14:04.351 "data_size": 63488 00:14:04.351 }, 00:14:04.351 { 00:14:04.351 "name": "BaseBdev3", 00:14:04.351 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:04.351 "is_configured": true, 00:14:04.351 "data_offset": 2048, 00:14:04.351 "data_size": 63488 00:14:04.351 }, 00:14:04.351 { 00:14:04.351 "name": "BaseBdev4", 00:14:04.351 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:04.351 "is_configured": true, 00:14:04.351 "data_offset": 2048, 00:14:04.351 "data_size": 63488 00:14:04.351 } 00:14:04.351 ] 00:14:04.351 }' 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.351 
20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.351 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.612 [2024-12-08 20:09:36.332245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.612 [2024-12-08 20:09:36.332413] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.612 [2024-12-08 20:09:36.332428] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:04.612 request: 00:14:04.612 { 00:14:04.612 "base_bdev": "BaseBdev1", 00:14:04.612 "raid_bdev": "raid_bdev1", 00:14:04.612 "method": "bdev_raid_add_base_bdev", 00:14:04.612 "req_id": 1 00:14:04.612 } 00:14:04.612 Got JSON-RPC error response 00:14:04.612 response: 00:14:04.612 { 00:14:04.612 "code": -22, 00:14:04.612 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:04.612 } 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.612 20:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.551 20:09:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.551 "name": "raid_bdev1", 00:14:05.551 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:05.551 "strip_size_kb": 0, 00:14:05.551 "state": "online", 00:14:05.551 "raid_level": "raid1", 00:14:05.551 "superblock": true, 00:14:05.551 "num_base_bdevs": 4, 00:14:05.551 "num_base_bdevs_discovered": 2, 00:14:05.551 "num_base_bdevs_operational": 2, 00:14:05.551 "base_bdevs_list": [ 00:14:05.551 { 00:14:05.551 "name": null, 00:14:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.551 "is_configured": false, 00:14:05.551 "data_offset": 0, 00:14:05.551 "data_size": 63488 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "name": null, 00:14:05.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.551 "is_configured": false, 00:14:05.551 "data_offset": 2048, 00:14:05.551 "data_size": 63488 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "name": "BaseBdev3", 00:14:05.551 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:05.551 "is_configured": true, 00:14:05.551 "data_offset": 2048, 00:14:05.551 "data_size": 63488 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "name": "BaseBdev4", 00:14:05.551 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:05.551 "is_configured": true, 00:14:05.551 "data_offset": 2048, 00:14:05.551 "data_size": 63488 00:14:05.551 } 00:14:05.551 ] 00:14:05.551 }' 00:14:05.551 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.551 20:09:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.120 "name": "raid_bdev1", 00:14:06.120 "uuid": "87d92a2c-ac83-4f88-9366-ecd7981d9127", 00:14:06.120 "strip_size_kb": 0, 00:14:06.120 "state": "online", 00:14:06.120 "raid_level": "raid1", 00:14:06.120 "superblock": true, 00:14:06.120 "num_base_bdevs": 4, 00:14:06.120 "num_base_bdevs_discovered": 2, 00:14:06.120 "num_base_bdevs_operational": 2, 00:14:06.120 "base_bdevs_list": [ 00:14:06.120 { 00:14:06.120 "name": null, 00:14:06.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.120 "is_configured": false, 00:14:06.120 "data_offset": 0, 00:14:06.120 "data_size": 63488 00:14:06.120 }, 00:14:06.120 { 00:14:06.120 "name": null, 00:14:06.120 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:06.120 "is_configured": false, 00:14:06.120 "data_offset": 2048, 00:14:06.120 "data_size": 63488 00:14:06.120 }, 00:14:06.120 { 00:14:06.120 "name": "BaseBdev3", 00:14:06.120 "uuid": "8eb38d9c-af97-5b7f-86d5-d0c0b91998b4", 00:14:06.120 "is_configured": true, 00:14:06.120 "data_offset": 2048, 00:14:06.120 "data_size": 63488 00:14:06.120 }, 00:14:06.120 { 00:14:06.120 "name": "BaseBdev4", 00:14:06.120 "uuid": "9e9f6747-17e5-50e0-8588-7672e13c9f8c", 00:14:06.120 "is_configured": true, 00:14:06.120 "data_offset": 2048, 00:14:06.120 "data_size": 63488 00:14:06.120 } 00:14:06.120 ] 00:14:06.120 }' 00:14:06.120 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78868 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78868 ']' 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78868 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78868 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78868' 00:14:06.121 killing process with pid 78868 00:14:06.121 Received shutdown signal, test time was about 17.797041 seconds 00:14:06.121 00:14:06.121 Latency(us) 00:14:06.121 [2024-12-08T20:09:38.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.121 [2024-12-08T20:09:38.099Z] =================================================================================================================== 00:14:06.121 [2024-12-08T20:09:38.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78868 00:14:06.121 [2024-12-08 20:09:37.976392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.121 [2024-12-08 20:09:37.976516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.121 20:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78868 00:14:06.121 [2024-12-08 20:09:37.976586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.121 [2024-12-08 20:09:37.976598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:06.690 [2024-12-08 20:09:38.371910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.630 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:07.630 00:14:07.630 real 0m21.120s 00:14:07.630 user 0m27.551s 00:14:07.630 sys 0m2.486s 00:14:07.630 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.630 20:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.630 ************************************ 00:14:07.630 END TEST raid_rebuild_test_sb_io 00:14:07.630 
************************************ 00:14:07.630 20:09:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:07.630 20:09:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:07.630 20:09:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:07.630 20:09:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.630 20:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.630 ************************************ 00:14:07.630 START TEST raid5f_state_function_test 00:14:07.630 ************************************ 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.630 20:09:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:07.630 Process raid pid: 79592 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79592 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:07.630 
20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79592' 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79592 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79592 ']' 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.630 20:09:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.891 [2024-12-08 20:09:39.657141] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:07.891 [2024-12-08 20:09:39.657345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.891 [2024-12-08 20:09:39.828606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.151 [2024-12-08 20:09:39.939647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.410 [2024-12-08 20:09:40.133900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.410 [2024-12-08 20:09:40.134038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.672 [2024-12-08 20:09:40.479491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.672 [2024-12-08 20:09:40.479586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.672 [2024-12-08 20:09:40.479602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.672 [2024-12-08 20:09:40.479612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.672 [2024-12-08 20:09:40.479619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:08.672 [2024-12-08 20:09:40.479627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.672 "name": "Existed_Raid", 00:14:08.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.672 "strip_size_kb": 64, 00:14:08.672 "state": "configuring", 00:14:08.672 "raid_level": "raid5f", 00:14:08.672 "superblock": false, 00:14:08.672 "num_base_bdevs": 3, 00:14:08.672 "num_base_bdevs_discovered": 0, 00:14:08.672 "num_base_bdevs_operational": 3, 00:14:08.672 "base_bdevs_list": [ 00:14:08.672 { 00:14:08.672 "name": "BaseBdev1", 00:14:08.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.672 "is_configured": false, 00:14:08.672 "data_offset": 0, 00:14:08.672 "data_size": 0 00:14:08.672 }, 00:14:08.672 { 00:14:08.672 "name": "BaseBdev2", 00:14:08.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.672 "is_configured": false, 00:14:08.672 "data_offset": 0, 00:14:08.672 "data_size": 0 00:14:08.672 }, 00:14:08.672 { 00:14:08.672 "name": "BaseBdev3", 00:14:08.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.672 "is_configured": false, 00:14:08.672 "data_offset": 0, 00:14:08.672 "data_size": 0 00:14:08.672 } 00:14:08.672 ] 00:14:08.672 }' 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.672 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.931 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.931 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.931 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.191 [2024-12-08 20:09:40.910664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.191 [2024-12-08 20:09:40.910735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.191 [2024-12-08 20:09:40.922649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.191 [2024-12-08 20:09:40.922740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.191 [2024-12-08 20:09:40.922767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.191 [2024-12-08 20:09:40.922789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.191 [2024-12-08 20:09:40.922807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.191 [2024-12-08 20:09:40.922828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.191 [2024-12-08 20:09:40.967745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.191 BaseBdev1 00:14:09.191 20:09:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.191 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.192 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.192 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.192 20:09:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.192 [ 00:14:09.192 { 00:14:09.192 "name": "BaseBdev1", 00:14:09.192 "aliases": [ 00:14:09.192 "d894298d-b26e-46b7-b8a3-df59119281c0" 00:14:09.192 ], 00:14:09.192 "product_name": "Malloc disk", 00:14:09.192 "block_size": 512, 00:14:09.192 "num_blocks": 65536, 00:14:09.192 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:09.192 "assigned_rate_limits": { 00:14:09.192 "rw_ios_per_sec": 0, 00:14:09.192 
"rw_mbytes_per_sec": 0, 00:14:09.192 "r_mbytes_per_sec": 0, 00:14:09.192 "w_mbytes_per_sec": 0 00:14:09.192 }, 00:14:09.192 "claimed": true, 00:14:09.192 "claim_type": "exclusive_write", 00:14:09.192 "zoned": false, 00:14:09.192 "supported_io_types": { 00:14:09.192 "read": true, 00:14:09.192 "write": true, 00:14:09.192 "unmap": true, 00:14:09.192 "flush": true, 00:14:09.192 "reset": true, 00:14:09.192 "nvme_admin": false, 00:14:09.192 "nvme_io": false, 00:14:09.192 "nvme_io_md": false, 00:14:09.192 "write_zeroes": true, 00:14:09.192 "zcopy": true, 00:14:09.192 "get_zone_info": false, 00:14:09.192 "zone_management": false, 00:14:09.192 "zone_append": false, 00:14:09.192 "compare": false, 00:14:09.192 "compare_and_write": false, 00:14:09.192 "abort": true, 00:14:09.192 "seek_hole": false, 00:14:09.192 "seek_data": false, 00:14:09.192 "copy": true, 00:14:09.192 "nvme_iov_md": false 00:14:09.192 }, 00:14:09.192 "memory_domains": [ 00:14:09.192 { 00:14:09.192 "dma_device_id": "system", 00:14:09.192 "dma_device_type": 1 00:14:09.192 }, 00:14:09.192 { 00:14:09.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.192 "dma_device_type": 2 00:14:09.192 } 00:14:09.192 ], 00:14:09.192 "driver_specific": {} 00:14:09.192 } 00:14:09.192 ] 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.192 20:09:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.192 "name": "Existed_Raid", 00:14:09.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.192 "strip_size_kb": 64, 00:14:09.192 "state": "configuring", 00:14:09.192 "raid_level": "raid5f", 00:14:09.192 "superblock": false, 00:14:09.192 "num_base_bdevs": 3, 00:14:09.192 "num_base_bdevs_discovered": 1, 00:14:09.192 "num_base_bdevs_operational": 3, 00:14:09.192 "base_bdevs_list": [ 00:14:09.192 { 00:14:09.192 "name": "BaseBdev1", 00:14:09.192 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:09.192 "is_configured": true, 00:14:09.192 "data_offset": 0, 00:14:09.192 "data_size": 65536 00:14:09.192 }, 00:14:09.192 { 00:14:09.192 "name": 
"BaseBdev2", 00:14:09.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.192 "is_configured": false, 00:14:09.192 "data_offset": 0, 00:14:09.192 "data_size": 0 00:14:09.192 }, 00:14:09.192 { 00:14:09.192 "name": "BaseBdev3", 00:14:09.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.192 "is_configured": false, 00:14:09.192 "data_offset": 0, 00:14:09.192 "data_size": 0 00:14:09.192 } 00:14:09.192 ] 00:14:09.192 }' 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.192 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.452 [2024-12-08 20:09:41.415071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.452 [2024-12-08 20:09:41.415158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.452 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.452 [2024-12-08 20:09:41.427105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.711 [2024-12-08 20:09:41.428938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:09.712 [2024-12-08 20:09:41.428984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.712 [2024-12-08 20:09:41.428994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.712 [2024-12-08 20:09:41.429002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.712 "name": "Existed_Raid", 00:14:09.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.712 "strip_size_kb": 64, 00:14:09.712 "state": "configuring", 00:14:09.712 "raid_level": "raid5f", 00:14:09.712 "superblock": false, 00:14:09.712 "num_base_bdevs": 3, 00:14:09.712 "num_base_bdevs_discovered": 1, 00:14:09.712 "num_base_bdevs_operational": 3, 00:14:09.712 "base_bdevs_list": [ 00:14:09.712 { 00:14:09.712 "name": "BaseBdev1", 00:14:09.712 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:09.712 "is_configured": true, 00:14:09.712 "data_offset": 0, 00:14:09.712 "data_size": 65536 00:14:09.712 }, 00:14:09.712 { 00:14:09.712 "name": "BaseBdev2", 00:14:09.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.712 "is_configured": false, 00:14:09.712 "data_offset": 0, 00:14:09.712 "data_size": 0 00:14:09.712 }, 00:14:09.712 { 00:14:09.712 "name": "BaseBdev3", 00:14:09.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.712 "is_configured": false, 00:14:09.712 "data_offset": 0, 00:14:09.712 "data_size": 0 00:14:09.712 } 00:14:09.712 ] 00:14:09.712 }' 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.712 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.972 [2024-12-08 20:09:41.891150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.972 BaseBdev2 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.972 20:09:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.972 [ 00:14:09.972 { 00:14:09.972 "name": "BaseBdev2", 00:14:09.972 "aliases": [ 00:14:09.972 "2438b37d-cbbb-483f-82d6-3077c3a0de59" 00:14:09.972 ], 00:14:09.972 "product_name": "Malloc disk", 00:14:09.972 "block_size": 512, 00:14:09.972 "num_blocks": 65536, 00:14:09.972 "uuid": "2438b37d-cbbb-483f-82d6-3077c3a0de59", 00:14:09.972 "assigned_rate_limits": { 00:14:09.972 "rw_ios_per_sec": 0, 00:14:09.972 "rw_mbytes_per_sec": 0, 00:14:09.972 "r_mbytes_per_sec": 0, 00:14:09.972 "w_mbytes_per_sec": 0 00:14:09.972 }, 00:14:09.972 "claimed": true, 00:14:09.972 "claim_type": "exclusive_write", 00:14:09.972 "zoned": false, 00:14:09.972 "supported_io_types": { 00:14:09.972 "read": true, 00:14:09.972 "write": true, 00:14:09.972 "unmap": true, 00:14:09.972 "flush": true, 00:14:09.972 "reset": true, 00:14:09.972 "nvme_admin": false, 00:14:09.972 "nvme_io": false, 00:14:09.972 "nvme_io_md": false, 00:14:09.972 "write_zeroes": true, 00:14:09.972 "zcopy": true, 00:14:09.972 "get_zone_info": false, 00:14:09.972 "zone_management": false, 00:14:09.972 "zone_append": false, 00:14:09.972 "compare": false, 00:14:09.972 "compare_and_write": false, 00:14:09.972 "abort": true, 00:14:09.972 "seek_hole": false, 00:14:09.972 "seek_data": false, 00:14:09.972 "copy": true, 00:14:09.972 "nvme_iov_md": false 00:14:09.972 }, 00:14:09.972 "memory_domains": [ 00:14:09.972 { 00:14:09.972 "dma_device_id": "system", 00:14:09.972 "dma_device_type": 1 00:14:09.972 }, 00:14:09.972 { 00:14:09.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.972 "dma_device_type": 2 00:14:09.973 } 00:14:09.973 ], 00:14:09.973 "driver_specific": {} 00:14:09.973 } 00:14:09.973 ] 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.973 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.233 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.233 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:10.233 "name": "Existed_Raid", 00:14:10.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.233 "strip_size_kb": 64, 00:14:10.233 "state": "configuring", 00:14:10.233 "raid_level": "raid5f", 00:14:10.233 "superblock": false, 00:14:10.233 "num_base_bdevs": 3, 00:14:10.233 "num_base_bdevs_discovered": 2, 00:14:10.233 "num_base_bdevs_operational": 3, 00:14:10.233 "base_bdevs_list": [ 00:14:10.233 { 00:14:10.233 "name": "BaseBdev1", 00:14:10.233 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:10.233 "is_configured": true, 00:14:10.233 "data_offset": 0, 00:14:10.233 "data_size": 65536 00:14:10.233 }, 00:14:10.233 { 00:14:10.233 "name": "BaseBdev2", 00:14:10.233 "uuid": "2438b37d-cbbb-483f-82d6-3077c3a0de59", 00:14:10.233 "is_configured": true, 00:14:10.233 "data_offset": 0, 00:14:10.233 "data_size": 65536 00:14:10.233 }, 00:14:10.233 { 00:14:10.233 "name": "BaseBdev3", 00:14:10.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.233 "is_configured": false, 00:14:10.233 "data_offset": 0, 00:14:10.233 "data_size": 0 00:14:10.233 } 00:14:10.233 ] 00:14:10.233 }' 00:14:10.233 20:09:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.233 20:09:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.493 [2024-12-08 20:09:42.378505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.493 [2024-12-08 20:09:42.378570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:10.493 [2024-12-08 20:09:42.378586] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:10.493 [2024-12-08 20:09:42.378837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:10.493 [2024-12-08 20:09:42.384306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:10.493 [2024-12-08 20:09:42.384362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:10.493 [2024-12-08 20:09:42.384718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.493 BaseBdev3 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.493 [ 00:14:10.493 { 00:14:10.493 "name": "BaseBdev3", 00:14:10.493 "aliases": [ 00:14:10.493 "c5bbc435-7e2d-4475-bc38-5cf0fd5916d6" 00:14:10.493 ], 00:14:10.493 "product_name": "Malloc disk", 00:14:10.493 "block_size": 512, 00:14:10.493 "num_blocks": 65536, 00:14:10.493 "uuid": "c5bbc435-7e2d-4475-bc38-5cf0fd5916d6", 00:14:10.493 "assigned_rate_limits": { 00:14:10.493 "rw_ios_per_sec": 0, 00:14:10.493 "rw_mbytes_per_sec": 0, 00:14:10.493 "r_mbytes_per_sec": 0, 00:14:10.493 "w_mbytes_per_sec": 0 00:14:10.493 }, 00:14:10.493 "claimed": true, 00:14:10.493 "claim_type": "exclusive_write", 00:14:10.493 "zoned": false, 00:14:10.493 "supported_io_types": { 00:14:10.493 "read": true, 00:14:10.493 "write": true, 00:14:10.493 "unmap": true, 00:14:10.493 "flush": true, 00:14:10.493 "reset": true, 00:14:10.493 "nvme_admin": false, 00:14:10.493 "nvme_io": false, 00:14:10.493 "nvme_io_md": false, 00:14:10.493 "write_zeroes": true, 00:14:10.493 "zcopy": true, 00:14:10.493 "get_zone_info": false, 00:14:10.493 "zone_management": false, 00:14:10.493 "zone_append": false, 00:14:10.493 "compare": false, 00:14:10.493 "compare_and_write": false, 00:14:10.493 "abort": true, 00:14:10.493 "seek_hole": false, 00:14:10.493 "seek_data": false, 00:14:10.493 "copy": true, 00:14:10.493 "nvme_iov_md": false 00:14:10.493 }, 00:14:10.493 "memory_domains": [ 00:14:10.493 { 00:14:10.493 "dma_device_id": "system", 00:14:10.493 "dma_device_type": 1 00:14:10.493 }, 00:14:10.493 { 00:14:10.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.493 "dma_device_type": 2 00:14:10.493 } 00:14:10.493 ], 00:14:10.493 "driver_specific": {} 00:14:10.493 } 00:14:10.493 ] 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.493 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.494 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.494 20:09:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.754 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.754 "name": "Existed_Raid", 00:14:10.754 "uuid": "67af9ef3-8b72-41c7-84ca-13ff9f9a2deb", 00:14:10.754 "strip_size_kb": 64, 00:14:10.754 "state": "online", 00:14:10.754 "raid_level": "raid5f", 00:14:10.754 "superblock": false, 00:14:10.754 "num_base_bdevs": 3, 00:14:10.754 "num_base_bdevs_discovered": 3, 00:14:10.754 "num_base_bdevs_operational": 3, 00:14:10.754 "base_bdevs_list": [ 00:14:10.754 { 00:14:10.754 "name": "BaseBdev1", 00:14:10.754 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:10.754 "is_configured": true, 00:14:10.754 "data_offset": 0, 00:14:10.754 "data_size": 65536 00:14:10.754 }, 00:14:10.754 { 00:14:10.754 "name": "BaseBdev2", 00:14:10.754 "uuid": "2438b37d-cbbb-483f-82d6-3077c3a0de59", 00:14:10.754 "is_configured": true, 00:14:10.754 "data_offset": 0, 00:14:10.754 "data_size": 65536 00:14:10.754 }, 00:14:10.754 { 00:14:10.754 "name": "BaseBdev3", 00:14:10.754 "uuid": "c5bbc435-7e2d-4475-bc38-5cf0fd5916d6", 00:14:10.754 "is_configured": true, 00:14:10.754 "data_offset": 0, 00:14:10.754 "data_size": 65536 00:14:10.754 } 00:14:10.754 ] 00:14:10.754 }' 00:14:10.754 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.754 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.014 20:09:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.014 [2024-12-08 20:09:42.866709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.014 "name": "Existed_Raid", 00:14:11.014 "aliases": [ 00:14:11.014 "67af9ef3-8b72-41c7-84ca-13ff9f9a2deb" 00:14:11.014 ], 00:14:11.014 "product_name": "Raid Volume", 00:14:11.014 "block_size": 512, 00:14:11.014 "num_blocks": 131072, 00:14:11.014 "uuid": "67af9ef3-8b72-41c7-84ca-13ff9f9a2deb", 00:14:11.014 "assigned_rate_limits": { 00:14:11.014 "rw_ios_per_sec": 0, 00:14:11.014 "rw_mbytes_per_sec": 0, 00:14:11.014 "r_mbytes_per_sec": 0, 00:14:11.014 "w_mbytes_per_sec": 0 00:14:11.014 }, 00:14:11.014 "claimed": false, 00:14:11.014 "zoned": false, 00:14:11.014 "supported_io_types": { 00:14:11.014 "read": true, 00:14:11.014 "write": true, 00:14:11.014 "unmap": false, 00:14:11.014 "flush": false, 00:14:11.014 "reset": true, 00:14:11.014 "nvme_admin": false, 00:14:11.014 "nvme_io": false, 00:14:11.014 "nvme_io_md": false, 00:14:11.014 "write_zeroes": true, 00:14:11.014 "zcopy": false, 00:14:11.014 "get_zone_info": false, 00:14:11.014 "zone_management": false, 00:14:11.014 "zone_append": false, 
00:14:11.014 "compare": false, 00:14:11.014 "compare_and_write": false, 00:14:11.014 "abort": false, 00:14:11.014 "seek_hole": false, 00:14:11.014 "seek_data": false, 00:14:11.014 "copy": false, 00:14:11.014 "nvme_iov_md": false 00:14:11.014 }, 00:14:11.014 "driver_specific": { 00:14:11.014 "raid": { 00:14:11.014 "uuid": "67af9ef3-8b72-41c7-84ca-13ff9f9a2deb", 00:14:11.014 "strip_size_kb": 64, 00:14:11.014 "state": "online", 00:14:11.014 "raid_level": "raid5f", 00:14:11.014 "superblock": false, 00:14:11.014 "num_base_bdevs": 3, 00:14:11.014 "num_base_bdevs_discovered": 3, 00:14:11.014 "num_base_bdevs_operational": 3, 00:14:11.014 "base_bdevs_list": [ 00:14:11.014 { 00:14:11.014 "name": "BaseBdev1", 00:14:11.014 "uuid": "d894298d-b26e-46b7-b8a3-df59119281c0", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 }, 00:14:11.014 { 00:14:11.014 "name": "BaseBdev2", 00:14:11.014 "uuid": "2438b37d-cbbb-483f-82d6-3077c3a0de59", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 }, 00:14:11.014 { 00:14:11.014 "name": "BaseBdev3", 00:14:11.014 "uuid": "c5bbc435-7e2d-4475-bc38-5cf0fd5916d6", 00:14:11.014 "is_configured": true, 00:14:11.014 "data_offset": 0, 00:14:11.014 "data_size": 65536 00:14:11.014 } 00:14:11.014 ] 00:14:11.014 } 00:14:11.014 } 00:14:11.014 }' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:11.014 BaseBdev2 00:14:11.014 BaseBdev3' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:11.014 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.275 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:11.275 20:09:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.275 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.275 20:09:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 [2024-12-08 20:09:43.118126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:11.275 
20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.535 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.535 "name": "Existed_Raid", 00:14:11.535 "uuid": "67af9ef3-8b72-41c7-84ca-13ff9f9a2deb", 00:14:11.535 "strip_size_kb": 64, 00:14:11.535 "state": 
"online", 00:14:11.535 "raid_level": "raid5f", 00:14:11.535 "superblock": false, 00:14:11.535 "num_base_bdevs": 3, 00:14:11.535 "num_base_bdevs_discovered": 2, 00:14:11.535 "num_base_bdevs_operational": 2, 00:14:11.535 "base_bdevs_list": [ 00:14:11.535 { 00:14:11.535 "name": null, 00:14:11.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.535 "is_configured": false, 00:14:11.535 "data_offset": 0, 00:14:11.535 "data_size": 65536 00:14:11.535 }, 00:14:11.535 { 00:14:11.535 "name": "BaseBdev2", 00:14:11.535 "uuid": "2438b37d-cbbb-483f-82d6-3077c3a0de59", 00:14:11.535 "is_configured": true, 00:14:11.535 "data_offset": 0, 00:14:11.535 "data_size": 65536 00:14:11.535 }, 00:14:11.535 { 00:14:11.535 "name": "BaseBdev3", 00:14:11.535 "uuid": "c5bbc435-7e2d-4475-bc38-5cf0fd5916d6", 00:14:11.535 "is_configured": true, 00:14:11.535 "data_offset": 0, 00:14:11.535 "data_size": 65536 00:14:11.535 } 00:14:11.535 ] 00:14:11.535 }' 00:14:11.535 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.535 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.795 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.795 [2024-12-08 20:09:43.638100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.795 [2024-12-08 20:09:43.638194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.795 [2024-12-08 20:09:43.727242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.056 [2024-12-08 20:09:43.787145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.056 [2024-12-08 20:09:43.787228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.056 BaseBdev2 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.056 20:09:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:12.056 [ 00:14:12.056 { 00:14:12.056 "name": "BaseBdev2", 00:14:12.056 "aliases": [ 00:14:12.056 "ac095b2d-d135-4590-a769-156aa6cf7308" 00:14:12.056 ], 00:14:12.056 "product_name": "Malloc disk", 00:14:12.056 "block_size": 512, 00:14:12.056 "num_blocks": 65536, 00:14:12.056 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:12.056 "assigned_rate_limits": { 00:14:12.056 "rw_ios_per_sec": 0, 00:14:12.056 "rw_mbytes_per_sec": 0, 00:14:12.056 "r_mbytes_per_sec": 0, 00:14:12.056 "w_mbytes_per_sec": 0 00:14:12.057 }, 00:14:12.057 "claimed": false, 00:14:12.057 "zoned": false, 00:14:12.057 "supported_io_types": { 00:14:12.057 "read": true, 00:14:12.057 "write": true, 00:14:12.057 "unmap": true, 00:14:12.057 "flush": true, 00:14:12.057 "reset": true, 00:14:12.057 "nvme_admin": false, 00:14:12.057 "nvme_io": false, 00:14:12.057 "nvme_io_md": false, 00:14:12.057 "write_zeroes": true, 00:14:12.057 "zcopy": true, 00:14:12.057 "get_zone_info": false, 00:14:12.057 "zone_management": false, 00:14:12.057 "zone_append": false, 00:14:12.057 "compare": false, 00:14:12.057 "compare_and_write": false, 00:14:12.057 "abort": true, 00:14:12.057 "seek_hole": false, 00:14:12.057 "seek_data": false, 00:14:12.057 "copy": true, 00:14:12.057 "nvme_iov_md": false 00:14:12.057 }, 00:14:12.057 "memory_domains": [ 00:14:12.057 { 00:14:12.057 "dma_device_id": "system", 00:14:12.057 "dma_device_type": 1 00:14:12.057 }, 00:14:12.057 { 00:14:12.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.057 "dma_device_type": 2 00:14:12.057 } 00:14:12.057 ], 00:14:12.057 "driver_specific": {} 00:14:12.057 } 00:14:12.057 ] 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.057 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.317 BaseBdev3 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.317 [ 00:14:12.317 { 00:14:12.317 "name": "BaseBdev3", 00:14:12.317 "aliases": [ 00:14:12.317 "f7119e09-71fc-4aca-a6d7-343cf42afee2" 00:14:12.317 ], 00:14:12.317 "product_name": "Malloc disk", 00:14:12.317 "block_size": 512, 00:14:12.317 "num_blocks": 65536, 00:14:12.317 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:12.317 "assigned_rate_limits": { 00:14:12.317 "rw_ios_per_sec": 0, 00:14:12.317 "rw_mbytes_per_sec": 0, 00:14:12.317 "r_mbytes_per_sec": 0, 00:14:12.317 "w_mbytes_per_sec": 0 00:14:12.317 }, 00:14:12.317 "claimed": false, 00:14:12.317 "zoned": false, 00:14:12.317 "supported_io_types": { 00:14:12.317 "read": true, 00:14:12.317 "write": true, 00:14:12.317 "unmap": true, 00:14:12.317 "flush": true, 00:14:12.317 "reset": true, 00:14:12.317 "nvme_admin": false, 00:14:12.317 "nvme_io": false, 00:14:12.317 "nvme_io_md": false, 00:14:12.317 "write_zeroes": true, 00:14:12.317 "zcopy": true, 00:14:12.317 "get_zone_info": false, 00:14:12.317 "zone_management": false, 00:14:12.317 "zone_append": false, 00:14:12.317 "compare": false, 00:14:12.317 "compare_and_write": false, 00:14:12.317 "abort": true, 00:14:12.317 "seek_hole": false, 00:14:12.317 "seek_data": false, 00:14:12.317 "copy": true, 00:14:12.317 "nvme_iov_md": false 00:14:12.317 }, 00:14:12.317 "memory_domains": [ 00:14:12.317 { 00:14:12.317 "dma_device_id": "system", 00:14:12.317 "dma_device_type": 1 00:14:12.317 }, 00:14:12.317 { 00:14:12.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.317 "dma_device_type": 2 00:14:12.317 } 00:14:12.317 ], 00:14:12.317 "driver_specific": {} 00:14:12.317 } 00:14:12.317 ] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.317 20:09:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.317 [2024-12-08 20:09:44.089187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.317 [2024-12-08 20:09:44.089267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.317 [2024-12-08 20:09:44.089305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.317 [2024-12-08 20:09:44.091077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.317 20:09:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.317 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.318 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.318 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.318 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.318 "name": "Existed_Raid", 00:14:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.318 "strip_size_kb": 64, 00:14:12.318 "state": "configuring", 00:14:12.318 "raid_level": "raid5f", 00:14:12.318 "superblock": false, 00:14:12.318 "num_base_bdevs": 3, 00:14:12.318 "num_base_bdevs_discovered": 2, 00:14:12.318 "num_base_bdevs_operational": 3, 00:14:12.318 "base_bdevs_list": [ 00:14:12.318 { 00:14:12.318 "name": "BaseBdev1", 00:14:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.318 "is_configured": false, 00:14:12.318 "data_offset": 0, 00:14:12.318 "data_size": 0 00:14:12.318 }, 00:14:12.318 { 00:14:12.318 "name": "BaseBdev2", 00:14:12.318 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:12.318 "is_configured": true, 00:14:12.318 "data_offset": 0, 00:14:12.318 "data_size": 65536 00:14:12.318 }, 00:14:12.318 { 00:14:12.318 "name": "BaseBdev3", 00:14:12.318 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:12.318 "is_configured": true, 
00:14:12.318 "data_offset": 0, 00:14:12.318 "data_size": 65536 00:14:12.318 } 00:14:12.318 ] 00:14:12.318 }' 00:14:12.318 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.318 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.885 [2024-12-08 20:09:44.572376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.885 20:09:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.885 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.885 "name": "Existed_Raid", 00:14:12.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.885 "strip_size_kb": 64, 00:14:12.885 "state": "configuring", 00:14:12.885 "raid_level": "raid5f", 00:14:12.885 "superblock": false, 00:14:12.885 "num_base_bdevs": 3, 00:14:12.885 "num_base_bdevs_discovered": 1, 00:14:12.885 "num_base_bdevs_operational": 3, 00:14:12.885 "base_bdevs_list": [ 00:14:12.885 { 00:14:12.885 "name": "BaseBdev1", 00:14:12.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.885 "is_configured": false, 00:14:12.885 "data_offset": 0, 00:14:12.885 "data_size": 0 00:14:12.886 }, 00:14:12.886 { 00:14:12.886 "name": null, 00:14:12.886 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:12.886 "is_configured": false, 00:14:12.886 "data_offset": 0, 00:14:12.886 "data_size": 65536 00:14:12.886 }, 00:14:12.886 { 00:14:12.886 "name": "BaseBdev3", 00:14:12.886 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:12.886 "is_configured": true, 00:14:12.886 "data_offset": 0, 00:14:12.886 "data_size": 65536 00:14:12.886 } 00:14:12.886 ] 00:14:12.886 }' 00:14:12.886 20:09:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.886 20:09:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.145 [2024-12-08 20:09:45.103205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.145 BaseBdev1 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.145 20:09:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.145 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.404 [ 00:14:13.404 { 00:14:13.404 "name": "BaseBdev1", 00:14:13.404 "aliases": [ 00:14:13.404 "50fd7b46-f804-4195-9a15-80ad824b621a" 00:14:13.404 ], 00:14:13.404 "product_name": "Malloc disk", 00:14:13.404 "block_size": 512, 00:14:13.404 "num_blocks": 65536, 00:14:13.404 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:13.404 "assigned_rate_limits": { 00:14:13.404 "rw_ios_per_sec": 0, 00:14:13.404 "rw_mbytes_per_sec": 0, 00:14:13.404 "r_mbytes_per_sec": 0, 00:14:13.404 "w_mbytes_per_sec": 0 00:14:13.404 }, 00:14:13.404 "claimed": true, 00:14:13.404 "claim_type": "exclusive_write", 00:14:13.404 "zoned": false, 00:14:13.404 "supported_io_types": { 00:14:13.404 "read": true, 00:14:13.404 "write": true, 00:14:13.404 "unmap": true, 00:14:13.404 "flush": true, 00:14:13.404 "reset": true, 00:14:13.404 "nvme_admin": false, 00:14:13.404 "nvme_io": false, 00:14:13.404 "nvme_io_md": false, 00:14:13.404 "write_zeroes": true, 00:14:13.404 "zcopy": true, 00:14:13.404 "get_zone_info": false, 00:14:13.404 "zone_management": false, 00:14:13.404 "zone_append": false, 00:14:13.404 
"compare": false, 00:14:13.404 "compare_and_write": false, 00:14:13.404 "abort": true, 00:14:13.404 "seek_hole": false, 00:14:13.404 "seek_data": false, 00:14:13.404 "copy": true, 00:14:13.404 "nvme_iov_md": false 00:14:13.404 }, 00:14:13.404 "memory_domains": [ 00:14:13.404 { 00:14:13.404 "dma_device_id": "system", 00:14:13.404 "dma_device_type": 1 00:14:13.404 }, 00:14:13.404 { 00:14:13.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.404 "dma_device_type": 2 00:14:13.404 } 00:14:13.404 ], 00:14:13.404 "driver_specific": {} 00:14:13.404 } 00:14:13.404 ] 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.404 20:09:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.404 "name": "Existed_Raid", 00:14:13.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.404 "strip_size_kb": 64, 00:14:13.404 "state": "configuring", 00:14:13.404 "raid_level": "raid5f", 00:14:13.404 "superblock": false, 00:14:13.404 "num_base_bdevs": 3, 00:14:13.404 "num_base_bdevs_discovered": 2, 00:14:13.404 "num_base_bdevs_operational": 3, 00:14:13.404 "base_bdevs_list": [ 00:14:13.404 { 00:14:13.404 "name": "BaseBdev1", 00:14:13.404 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:13.404 "is_configured": true, 00:14:13.404 "data_offset": 0, 00:14:13.404 "data_size": 65536 00:14:13.404 }, 00:14:13.404 { 00:14:13.404 "name": null, 00:14:13.404 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:13.404 "is_configured": false, 00:14:13.404 "data_offset": 0, 00:14:13.404 "data_size": 65536 00:14:13.404 }, 00:14:13.404 { 00:14:13.404 "name": "BaseBdev3", 00:14:13.404 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:13.404 "is_configured": true, 00:14:13.404 "data_offset": 0, 00:14:13.404 "data_size": 65536 00:14:13.404 } 00:14:13.404 ] 00:14:13.404 }' 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.404 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.663 20:09:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.663 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.663 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.663 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.663 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.923 [2024-12-08 20:09:45.654325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.923 20:09:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.923 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.923 "name": "Existed_Raid", 00:14:13.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.923 "strip_size_kb": 64, 00:14:13.923 "state": "configuring", 00:14:13.923 "raid_level": "raid5f", 00:14:13.923 "superblock": false, 00:14:13.923 "num_base_bdevs": 3, 00:14:13.923 "num_base_bdevs_discovered": 1, 00:14:13.923 "num_base_bdevs_operational": 3, 00:14:13.923 "base_bdevs_list": [ 00:14:13.923 { 00:14:13.923 "name": "BaseBdev1", 00:14:13.923 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:13.923 "is_configured": true, 00:14:13.923 "data_offset": 0, 00:14:13.923 "data_size": 65536 00:14:13.923 }, 00:14:13.923 { 00:14:13.924 "name": null, 00:14:13.924 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:13.924 "is_configured": false, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 }, 00:14:13.924 { 00:14:13.924 "name": null, 
00:14:13.924 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:13.924 "is_configured": false, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 } 00:14:13.924 ] 00:14:13.924 }' 00:14:13.924 20:09:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.924 20:09:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 [2024-12-08 20:09:46.117545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.183 20:09:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.183 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.443 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.443 "name": "Existed_Raid", 00:14:14.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.443 "strip_size_kb": 64, 00:14:14.443 "state": "configuring", 00:14:14.443 "raid_level": "raid5f", 00:14:14.443 "superblock": false, 00:14:14.443 "num_base_bdevs": 3, 00:14:14.443 "num_base_bdevs_discovered": 2, 00:14:14.443 "num_base_bdevs_operational": 3, 00:14:14.443 "base_bdevs_list": [ 00:14:14.443 { 
00:14:14.443 "name": "BaseBdev1", 00:14:14.443 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:14.443 "is_configured": true, 00:14:14.443 "data_offset": 0, 00:14:14.443 "data_size": 65536 00:14:14.443 }, 00:14:14.443 { 00:14:14.443 "name": null, 00:14:14.443 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:14.443 "is_configured": false, 00:14:14.443 "data_offset": 0, 00:14:14.443 "data_size": 65536 00:14:14.443 }, 00:14:14.443 { 00:14:14.443 "name": "BaseBdev3", 00:14:14.443 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:14.443 "is_configured": true, 00:14:14.443 "data_offset": 0, 00:14:14.443 "data_size": 65536 00:14:14.443 } 00:14:14.443 ] 00:14:14.443 }' 00:14:14.443 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.443 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.702 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.702 [2024-12-08 20:09:46.632694] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.960 "name": "Existed_Raid", 00:14:14.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.960 "strip_size_kb": 64, 00:14:14.960 "state": "configuring", 00:14:14.960 "raid_level": "raid5f", 00:14:14.960 "superblock": false, 00:14:14.960 "num_base_bdevs": 3, 00:14:14.960 "num_base_bdevs_discovered": 1, 00:14:14.960 "num_base_bdevs_operational": 3, 00:14:14.960 "base_bdevs_list": [ 00:14:14.960 { 00:14:14.960 "name": null, 00:14:14.960 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:14.960 "is_configured": false, 00:14:14.960 "data_offset": 0, 00:14:14.960 "data_size": 65536 00:14:14.960 }, 00:14:14.960 { 00:14:14.960 "name": null, 00:14:14.960 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:14.960 "is_configured": false, 00:14:14.960 "data_offset": 0, 00:14:14.960 "data_size": 65536 00:14:14.960 }, 00:14:14.960 { 00:14:14.960 "name": "BaseBdev3", 00:14:14.960 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:14.960 "is_configured": true, 00:14:14.960 "data_offset": 0, 00:14:14.960 "data_size": 65536 00:14:14.960 } 00:14:14.960 ] 00:14:14.960 }' 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.960 20:09:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.219 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.219 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.219 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.219 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.219 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.478 [2024-12-08 20:09:47.217798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.478 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.479 20:09:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.479 "name": "Existed_Raid", 00:14:15.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.479 "strip_size_kb": 64, 00:14:15.479 "state": "configuring", 00:14:15.479 "raid_level": "raid5f", 00:14:15.479 "superblock": false, 00:14:15.479 "num_base_bdevs": 3, 00:14:15.479 "num_base_bdevs_discovered": 2, 00:14:15.479 "num_base_bdevs_operational": 3, 00:14:15.479 "base_bdevs_list": [ 00:14:15.479 { 00:14:15.479 "name": null, 00:14:15.479 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:15.479 "is_configured": false, 00:14:15.479 "data_offset": 0, 00:14:15.479 "data_size": 65536 00:14:15.479 }, 00:14:15.479 { 00:14:15.479 "name": "BaseBdev2", 00:14:15.479 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:15.479 "is_configured": true, 00:14:15.479 "data_offset": 0, 00:14:15.479 "data_size": 65536 00:14:15.479 }, 00:14:15.479 { 00:14:15.479 "name": "BaseBdev3", 00:14:15.479 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:15.479 "is_configured": true, 00:14:15.479 "data_offset": 0, 00:14:15.479 "data_size": 65536 00:14:15.479 } 00:14:15.479 ] 00:14:15.479 }' 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.479 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.738 20:09:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.738 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50fd7b46-f804-4195-9a15-80ad824b621a 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 [2024-12-08 20:09:47.781055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:16.011 [2024-12-08 20:09:47.781144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:16.011 [2024-12-08 20:09:47.781170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:16.011 [2024-12-08 20:09:47.781468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:16.011 [2024-12-08 20:09:47.786752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:16.011 [2024-12-08 20:09:47.786806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:16.011 [2024-12-08 20:09:47.787115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.011 NewBaseBdev 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.011 20:09:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 [ 00:14:16.011 { 00:14:16.011 "name": "NewBaseBdev", 00:14:16.011 "aliases": [ 00:14:16.011 "50fd7b46-f804-4195-9a15-80ad824b621a" 00:14:16.011 ], 00:14:16.011 "product_name": "Malloc disk", 00:14:16.011 "block_size": 512, 00:14:16.011 "num_blocks": 65536, 00:14:16.011 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:16.011 "assigned_rate_limits": { 00:14:16.011 "rw_ios_per_sec": 0, 00:14:16.011 "rw_mbytes_per_sec": 0, 00:14:16.011 "r_mbytes_per_sec": 0, 00:14:16.011 "w_mbytes_per_sec": 0 00:14:16.011 }, 00:14:16.011 "claimed": true, 00:14:16.011 "claim_type": "exclusive_write", 00:14:16.011 "zoned": false, 00:14:16.011 "supported_io_types": { 00:14:16.011 "read": true, 00:14:16.011 "write": true, 00:14:16.011 "unmap": true, 00:14:16.011 "flush": true, 00:14:16.011 "reset": true, 00:14:16.011 "nvme_admin": false, 00:14:16.011 "nvme_io": false, 00:14:16.011 "nvme_io_md": false, 00:14:16.011 "write_zeroes": true, 00:14:16.011 "zcopy": true, 00:14:16.011 "get_zone_info": false, 00:14:16.011 "zone_management": false, 00:14:16.011 "zone_append": false, 00:14:16.011 "compare": false, 00:14:16.011 "compare_and_write": false, 00:14:16.011 "abort": true, 00:14:16.011 "seek_hole": false, 00:14:16.011 "seek_data": false, 00:14:16.011 "copy": true, 00:14:16.011 "nvme_iov_md": false 00:14:16.011 }, 00:14:16.011 "memory_domains": [ 00:14:16.011 { 00:14:16.011 "dma_device_id": "system", 00:14:16.011 "dma_device_type": 1 00:14:16.011 }, 00:14:16.011 { 00:14:16.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.011 "dma_device_type": 2 00:14:16.011 } 00:14:16.011 ], 00:14:16.011 "driver_specific": {} 00:14:16.011 } 00:14:16.011 ] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.011 20:09:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.011 "name": "Existed_Raid", 00:14:16.011 "uuid": "038a8f4a-e0a2-4114-90d0-ac239177feb5", 00:14:16.011 "strip_size_kb": 64, 00:14:16.011 "state": "online", 
00:14:16.011 "raid_level": "raid5f", 00:14:16.011 "superblock": false, 00:14:16.011 "num_base_bdevs": 3, 00:14:16.011 "num_base_bdevs_discovered": 3, 00:14:16.011 "num_base_bdevs_operational": 3, 00:14:16.011 "base_bdevs_list": [ 00:14:16.011 { 00:14:16.011 "name": "NewBaseBdev", 00:14:16.011 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:16.011 "is_configured": true, 00:14:16.011 "data_offset": 0, 00:14:16.011 "data_size": 65536 00:14:16.011 }, 00:14:16.011 { 00:14:16.011 "name": "BaseBdev2", 00:14:16.011 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:16.011 "is_configured": true, 00:14:16.011 "data_offset": 0, 00:14:16.011 "data_size": 65536 00:14:16.011 }, 00:14:16.011 { 00:14:16.011 "name": "BaseBdev3", 00:14:16.011 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:16.011 "is_configured": true, 00:14:16.011 "data_offset": 0, 00:14:16.011 "data_size": 65536 00:14:16.011 } 00:14:16.011 ] 00:14:16.011 }' 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.011 20:09:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.291 20:09:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.291 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.291 [2024-12-08 20:09:48.264726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.560 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.560 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.560 "name": "Existed_Raid", 00:14:16.560 "aliases": [ 00:14:16.560 "038a8f4a-e0a2-4114-90d0-ac239177feb5" 00:14:16.560 ], 00:14:16.560 "product_name": "Raid Volume", 00:14:16.560 "block_size": 512, 00:14:16.560 "num_blocks": 131072, 00:14:16.560 "uuid": "038a8f4a-e0a2-4114-90d0-ac239177feb5", 00:14:16.560 "assigned_rate_limits": { 00:14:16.560 "rw_ios_per_sec": 0, 00:14:16.561 "rw_mbytes_per_sec": 0, 00:14:16.561 "r_mbytes_per_sec": 0, 00:14:16.561 "w_mbytes_per_sec": 0 00:14:16.561 }, 00:14:16.561 "claimed": false, 00:14:16.561 "zoned": false, 00:14:16.561 "supported_io_types": { 00:14:16.561 "read": true, 00:14:16.561 "write": true, 00:14:16.561 "unmap": false, 00:14:16.561 "flush": false, 00:14:16.561 "reset": true, 00:14:16.561 "nvme_admin": false, 00:14:16.561 "nvme_io": false, 00:14:16.561 "nvme_io_md": false, 00:14:16.561 "write_zeroes": true, 00:14:16.561 "zcopy": false, 00:14:16.561 "get_zone_info": false, 00:14:16.561 "zone_management": false, 00:14:16.561 "zone_append": false, 00:14:16.561 "compare": false, 00:14:16.561 "compare_and_write": false, 00:14:16.561 "abort": false, 00:14:16.561 "seek_hole": false, 00:14:16.561 "seek_data": false, 00:14:16.561 "copy": false, 00:14:16.561 "nvme_iov_md": false 00:14:16.561 }, 00:14:16.561 "driver_specific": { 00:14:16.561 "raid": { 00:14:16.561 "uuid": 
"038a8f4a-e0a2-4114-90d0-ac239177feb5", 00:14:16.561 "strip_size_kb": 64, 00:14:16.561 "state": "online", 00:14:16.561 "raid_level": "raid5f", 00:14:16.561 "superblock": false, 00:14:16.561 "num_base_bdevs": 3, 00:14:16.561 "num_base_bdevs_discovered": 3, 00:14:16.561 "num_base_bdevs_operational": 3, 00:14:16.561 "base_bdevs_list": [ 00:14:16.561 { 00:14:16.561 "name": "NewBaseBdev", 00:14:16.561 "uuid": "50fd7b46-f804-4195-9a15-80ad824b621a", 00:14:16.561 "is_configured": true, 00:14:16.561 "data_offset": 0, 00:14:16.561 "data_size": 65536 00:14:16.561 }, 00:14:16.561 { 00:14:16.561 "name": "BaseBdev2", 00:14:16.561 "uuid": "ac095b2d-d135-4590-a769-156aa6cf7308", 00:14:16.561 "is_configured": true, 00:14:16.561 "data_offset": 0, 00:14:16.561 "data_size": 65536 00:14:16.561 }, 00:14:16.561 { 00:14:16.561 "name": "BaseBdev3", 00:14:16.561 "uuid": "f7119e09-71fc-4aca-a6d7-343cf42afee2", 00:14:16.561 "is_configured": true, 00:14:16.561 "data_offset": 0, 00:14:16.561 "data_size": 65536 00:14:16.561 } 00:14:16.561 ] 00:14:16.561 } 00:14:16.561 } 00:14:16.561 }' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:16.561 BaseBdev2 00:14:16.561 BaseBdev3' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.561 20:09:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.561 [2024-12-08 20:09:48.524078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.561 [2024-12-08 20:09:48.524141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.561 [2024-12-08 20:09:48.524226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.561 [2024-12-08 20:09:48.524558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.561 [2024-12-08 20:09:48.524616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79592 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79592 ']' 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79592 00:14:16.561 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79592 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79592' 00:14:16.871 killing process with pid 79592 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79592 00:14:16.871 [2024-12-08 20:09:48.571016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.871 20:09:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79592 00:14:17.144 [2024-12-08 20:09:48.858818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:18.100 ************************************ 00:14:18.100 END TEST raid5f_state_function_test 00:14:18.100 ************************************ 00:14:18.100 00:14:18.100 real 0m10.363s 00:14:18.100 user 0m16.489s 00:14:18.100 sys 0m1.827s 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 20:09:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:18.100 20:09:49 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:18.100 20:09:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.100 20:09:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 ************************************ 00:14:18.100 START TEST raid5f_state_function_test_sb 00:14:18.100 ************************************ 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.100 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.101 20:09:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.101 20:09:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:18.101 Process raid pid: 80212 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80212 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80212' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80212 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80212 ']' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.101 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.362 [2024-12-08 20:09:50.087138] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:18.362 [2024-12-08 20:09:50.087268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.362 [2024-12-08 20:09:50.249404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.625 [2024-12-08 20:09:50.355759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.625 [2024-12-08 20:09:50.548508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.625 [2024-12-08 20:09:50.548551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.194 [2024-12-08 20:09:50.909810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.194 [2024-12-08 20:09:50.909865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.194 [2024-12-08 20:09:50.909880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.194 [2024-12-08 20:09:50.909890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.194 [2024-12-08 20:09:50.909896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:19.194 [2024-12-08 20:09:50.909905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.194 20:09:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.194 "name": "Existed_Raid", 00:14:19.194 "uuid": "a21615c1-9711-44b5-a3ba-42bea5ce838b", 00:14:19.194 "strip_size_kb": 64, 00:14:19.194 "state": "configuring", 00:14:19.194 "raid_level": "raid5f", 00:14:19.194 "superblock": true, 00:14:19.194 "num_base_bdevs": 3, 00:14:19.194 "num_base_bdevs_discovered": 0, 00:14:19.194 "num_base_bdevs_operational": 3, 00:14:19.194 "base_bdevs_list": [ 00:14:19.194 { 00:14:19.194 "name": "BaseBdev1", 00:14:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.194 "is_configured": false, 00:14:19.194 "data_offset": 0, 00:14:19.194 "data_size": 0 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "name": "BaseBdev2", 00:14:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.194 "is_configured": false, 00:14:19.194 "data_offset": 0, 00:14:19.194 "data_size": 0 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "name": "BaseBdev3", 00:14:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.194 "is_configured": false, 00:14:19.194 "data_offset": 0, 00:14:19.194 "data_size": 0 00:14:19.194 } 00:14:19.194 ] 00:14:19.194 }' 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.194 20:09:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.454 [2024-12-08 20:09:51.364979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.454 
[2024-12-08 20:09:51.365048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.454 [2024-12-08 20:09:51.376974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.454 [2024-12-08 20:09:51.377060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.454 [2024-12-08 20:09:51.377087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.454 [2024-12-08 20:09:51.377110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.454 [2024-12-08 20:09:51.377128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.454 [2024-12-08 20:09:51.377163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.454 [2024-12-08 20:09:51.422443] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.454 BaseBdev1 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.454 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.713 [ 00:14:19.713 { 00:14:19.713 "name": "BaseBdev1", 00:14:19.713 "aliases": [ 00:14:19.713 "20ac9619-f596-44aa-8c40-6d9140b43cb4" 00:14:19.713 ], 00:14:19.713 "product_name": "Malloc disk", 00:14:19.713 "block_size": 512, 00:14:19.713 
"num_blocks": 65536, 00:14:19.713 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:19.713 "assigned_rate_limits": { 00:14:19.713 "rw_ios_per_sec": 0, 00:14:19.713 "rw_mbytes_per_sec": 0, 00:14:19.713 "r_mbytes_per_sec": 0, 00:14:19.713 "w_mbytes_per_sec": 0 00:14:19.713 }, 00:14:19.713 "claimed": true, 00:14:19.713 "claim_type": "exclusive_write", 00:14:19.713 "zoned": false, 00:14:19.713 "supported_io_types": { 00:14:19.713 "read": true, 00:14:19.713 "write": true, 00:14:19.713 "unmap": true, 00:14:19.713 "flush": true, 00:14:19.713 "reset": true, 00:14:19.713 "nvme_admin": false, 00:14:19.713 "nvme_io": false, 00:14:19.713 "nvme_io_md": false, 00:14:19.713 "write_zeroes": true, 00:14:19.713 "zcopy": true, 00:14:19.713 "get_zone_info": false, 00:14:19.713 "zone_management": false, 00:14:19.713 "zone_append": false, 00:14:19.713 "compare": false, 00:14:19.713 "compare_and_write": false, 00:14:19.713 "abort": true, 00:14:19.713 "seek_hole": false, 00:14:19.713 "seek_data": false, 00:14:19.713 "copy": true, 00:14:19.713 "nvme_iov_md": false 00:14:19.713 }, 00:14:19.713 "memory_domains": [ 00:14:19.713 { 00:14:19.713 "dma_device_id": "system", 00:14:19.713 "dma_device_type": 1 00:14:19.713 }, 00:14:19.713 { 00:14:19.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.713 "dma_device_type": 2 00:14:19.713 } 00:14:19.713 ], 00:14:19.713 "driver_specific": {} 00:14:19.713 } 00:14:19.713 ] 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.713 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.713 "name": "Existed_Raid", 00:14:19.713 "uuid": "6c39c43f-0d25-44b5-bbf8-ceead079be87", 00:14:19.713 "strip_size_kb": 64, 00:14:19.713 "state": "configuring", 00:14:19.714 "raid_level": "raid5f", 00:14:19.714 "superblock": true, 00:14:19.714 "num_base_bdevs": 3, 00:14:19.714 "num_base_bdevs_discovered": 1, 00:14:19.714 "num_base_bdevs_operational": 3, 00:14:19.714 "base_bdevs_list": [ 00:14:19.714 { 00:14:19.714 
"name": "BaseBdev1", 00:14:19.714 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:19.714 "is_configured": true, 00:14:19.714 "data_offset": 2048, 00:14:19.714 "data_size": 63488 00:14:19.714 }, 00:14:19.714 { 00:14:19.714 "name": "BaseBdev2", 00:14:19.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.714 "is_configured": false, 00:14:19.714 "data_offset": 0, 00:14:19.714 "data_size": 0 00:14:19.714 }, 00:14:19.714 { 00:14:19.714 "name": "BaseBdev3", 00:14:19.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.714 "is_configured": false, 00:14:19.714 "data_offset": 0, 00:14:19.714 "data_size": 0 00:14:19.714 } 00:14:19.714 ] 00:14:19.714 }' 00:14:19.714 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.714 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.973 [2024-12-08 20:09:51.881667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.973 [2024-12-08 20:09:51.881746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:19.973 [2024-12-08 20:09:51.893703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.973 [2024-12-08 20:09:51.895477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.973 [2024-12-08 20:09:51.895518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.973 [2024-12-08 20:09:51.895527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.973 [2024-12-08 20:09:51.895535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.973 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.232 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.232 "name": "Existed_Raid", 00:14:20.232 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:20.232 "strip_size_kb": 64, 00:14:20.232 "state": "configuring", 00:14:20.232 "raid_level": "raid5f", 00:14:20.232 "superblock": true, 00:14:20.232 "num_base_bdevs": 3, 00:14:20.232 "num_base_bdevs_discovered": 1, 00:14:20.232 "num_base_bdevs_operational": 3, 00:14:20.232 "base_bdevs_list": [ 00:14:20.232 { 00:14:20.232 "name": "BaseBdev1", 00:14:20.232 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:20.232 "is_configured": true, 00:14:20.232 "data_offset": 2048, 00:14:20.232 "data_size": 63488 00:14:20.232 }, 00:14:20.232 { 00:14:20.232 "name": "BaseBdev2", 00:14:20.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.232 "is_configured": false, 00:14:20.232 "data_offset": 0, 00:14:20.232 "data_size": 0 00:14:20.232 }, 00:14:20.232 { 00:14:20.232 "name": "BaseBdev3", 00:14:20.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.232 "is_configured": false, 00:14:20.232 "data_offset": 0, 00:14:20.232 "data_size": 
0 00:14:20.232 } 00:14:20.232 ] 00:14:20.232 }' 00:14:20.232 20:09:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.232 20:09:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.491 [2024-12-08 20:09:52.378311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.491 BaseBdev2 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.491 [ 00:14:20.491 { 00:14:20.491 "name": "BaseBdev2", 00:14:20.491 "aliases": [ 00:14:20.491 "255d6dc3-acee-4ef5-9ef6-74989fd850a5" 00:14:20.491 ], 00:14:20.491 "product_name": "Malloc disk", 00:14:20.491 "block_size": 512, 00:14:20.491 "num_blocks": 65536, 00:14:20.491 "uuid": "255d6dc3-acee-4ef5-9ef6-74989fd850a5", 00:14:20.491 "assigned_rate_limits": { 00:14:20.491 "rw_ios_per_sec": 0, 00:14:20.491 "rw_mbytes_per_sec": 0, 00:14:20.491 "r_mbytes_per_sec": 0, 00:14:20.491 "w_mbytes_per_sec": 0 00:14:20.491 }, 00:14:20.491 "claimed": true, 00:14:20.491 "claim_type": "exclusive_write", 00:14:20.491 "zoned": false, 00:14:20.491 "supported_io_types": { 00:14:20.491 "read": true, 00:14:20.491 "write": true, 00:14:20.491 "unmap": true, 00:14:20.491 "flush": true, 00:14:20.491 "reset": true, 00:14:20.491 "nvme_admin": false, 00:14:20.491 "nvme_io": false, 00:14:20.491 "nvme_io_md": false, 00:14:20.491 "write_zeroes": true, 00:14:20.491 "zcopy": true, 00:14:20.491 "get_zone_info": false, 00:14:20.491 "zone_management": false, 00:14:20.491 "zone_append": false, 00:14:20.491 "compare": false, 00:14:20.491 "compare_and_write": false, 00:14:20.491 "abort": true, 00:14:20.491 "seek_hole": false, 00:14:20.491 "seek_data": false, 00:14:20.491 "copy": true, 00:14:20.491 "nvme_iov_md": false 00:14:20.491 }, 00:14:20.491 "memory_domains": [ 00:14:20.491 { 00:14:20.491 "dma_device_id": "system", 00:14:20.491 "dma_device_type": 1 00:14:20.491 }, 00:14:20.491 { 00:14:20.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.491 "dma_device_type": 2 00:14:20.491 } 
00:14:20.491 ], 00:14:20.491 "driver_specific": {} 00:14:20.491 } 00:14:20.491 ] 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.491 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.492 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.750 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.750 "name": "Existed_Raid", 00:14:20.750 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:20.750 "strip_size_kb": 64, 00:14:20.750 "state": "configuring", 00:14:20.750 "raid_level": "raid5f", 00:14:20.750 "superblock": true, 00:14:20.750 "num_base_bdevs": 3, 00:14:20.750 "num_base_bdevs_discovered": 2, 00:14:20.750 "num_base_bdevs_operational": 3, 00:14:20.750 "base_bdevs_list": [ 00:14:20.750 { 00:14:20.750 "name": "BaseBdev1", 00:14:20.750 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:20.751 "is_configured": true, 00:14:20.751 "data_offset": 2048, 00:14:20.751 "data_size": 63488 00:14:20.751 }, 00:14:20.751 { 00:14:20.751 "name": "BaseBdev2", 00:14:20.751 "uuid": "255d6dc3-acee-4ef5-9ef6-74989fd850a5", 00:14:20.751 "is_configured": true, 00:14:20.751 "data_offset": 2048, 00:14:20.751 "data_size": 63488 00:14:20.751 }, 00:14:20.751 { 00:14:20.751 "name": "BaseBdev3", 00:14:20.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.751 "is_configured": false, 00:14:20.751 "data_offset": 0, 00:14:20.751 "data_size": 0 00:14:20.751 } 00:14:20.751 ] 00:14:20.751 }' 00:14:20.751 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.751 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.010 [2024-12-08 20:09:52.921076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.010 [2024-12-08 20:09:52.921440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.010 [2024-12-08 20:09:52.921467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:21.010 [2024-12-08 20:09:52.921728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.010 BaseBdev3 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.010 [2024-12-08 20:09:52.927765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.010 [2024-12-08 20:09:52.927823] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:21.010 [2024-12-08 20:09:52.928047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.010 [ 00:14:21.010 { 00:14:21.010 "name": "BaseBdev3", 00:14:21.010 "aliases": [ 00:14:21.010 "776b96de-634c-4916-9d52-56b97b042788" 00:14:21.010 ], 00:14:21.010 "product_name": "Malloc disk", 00:14:21.010 "block_size": 512, 00:14:21.010 "num_blocks": 65536, 00:14:21.010 "uuid": "776b96de-634c-4916-9d52-56b97b042788", 00:14:21.010 "assigned_rate_limits": { 00:14:21.010 "rw_ios_per_sec": 0, 00:14:21.010 "rw_mbytes_per_sec": 0, 00:14:21.010 "r_mbytes_per_sec": 0, 00:14:21.010 "w_mbytes_per_sec": 0 00:14:21.010 }, 00:14:21.010 "claimed": true, 00:14:21.010 "claim_type": "exclusive_write", 00:14:21.010 "zoned": false, 00:14:21.010 "supported_io_types": { 00:14:21.010 "read": true, 00:14:21.010 "write": true, 00:14:21.010 "unmap": true, 00:14:21.010 "flush": true, 00:14:21.010 "reset": true, 00:14:21.010 "nvme_admin": false, 00:14:21.010 "nvme_io": false, 00:14:21.010 "nvme_io_md": false, 00:14:21.010 "write_zeroes": true, 00:14:21.010 "zcopy": true, 00:14:21.010 "get_zone_info": false, 00:14:21.010 "zone_management": false, 00:14:21.010 "zone_append": false, 00:14:21.010 "compare": false, 00:14:21.010 "compare_and_write": false, 00:14:21.010 "abort": true, 00:14:21.010 "seek_hole": false, 00:14:21.010 "seek_data": false, 00:14:21.010 "copy": true, 00:14:21.010 
"nvme_iov_md": false 00:14:21.010 }, 00:14:21.010 "memory_domains": [ 00:14:21.010 { 00:14:21.010 "dma_device_id": "system", 00:14:21.010 "dma_device_type": 1 00:14:21.010 }, 00:14:21.010 { 00:14:21.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.010 "dma_device_type": 2 00:14:21.010 } 00:14:21.010 ], 00:14:21.010 "driver_specific": {} 00:14:21.010 } 00:14:21.010 ] 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.010 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.011 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.291 20:09:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.291 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.291 "name": "Existed_Raid", 00:14:21.291 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:21.291 "strip_size_kb": 64, 00:14:21.291 "state": "online", 00:14:21.291 "raid_level": "raid5f", 00:14:21.291 "superblock": true, 00:14:21.291 "num_base_bdevs": 3, 00:14:21.291 "num_base_bdevs_discovered": 3, 00:14:21.291 "num_base_bdevs_operational": 3, 00:14:21.291 "base_bdevs_list": [ 00:14:21.291 { 00:14:21.291 "name": "BaseBdev1", 00:14:21.291 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:21.291 "is_configured": true, 00:14:21.291 "data_offset": 2048, 00:14:21.291 "data_size": 63488 00:14:21.291 }, 00:14:21.291 { 00:14:21.291 "name": "BaseBdev2", 00:14:21.291 "uuid": "255d6dc3-acee-4ef5-9ef6-74989fd850a5", 00:14:21.291 "is_configured": true, 00:14:21.291 "data_offset": 2048, 00:14:21.291 "data_size": 63488 00:14:21.291 }, 00:14:21.291 { 00:14:21.291 "name": "BaseBdev3", 00:14:21.291 "uuid": "776b96de-634c-4916-9d52-56b97b042788", 00:14:21.291 "is_configured": true, 00:14:21.291 "data_offset": 2048, 00:14:21.291 "data_size": 63488 00:14:21.291 } 00:14:21.291 ] 00:14:21.291 }' 00:14:21.291 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.291 20:09:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.550 [2024-12-08 20:09:53.361710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.550 "name": "Existed_Raid", 00:14:21.550 "aliases": [ 00:14:21.550 "2dc2d34d-2417-4a78-88cd-5a62605b03b8" 00:14:21.550 ], 00:14:21.550 "product_name": "Raid Volume", 00:14:21.550 "block_size": 512, 00:14:21.550 "num_blocks": 126976, 00:14:21.550 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:21.550 "assigned_rate_limits": { 00:14:21.550 "rw_ios_per_sec": 0, 00:14:21.550 
"rw_mbytes_per_sec": 0, 00:14:21.550 "r_mbytes_per_sec": 0, 00:14:21.550 "w_mbytes_per_sec": 0 00:14:21.550 }, 00:14:21.550 "claimed": false, 00:14:21.550 "zoned": false, 00:14:21.550 "supported_io_types": { 00:14:21.550 "read": true, 00:14:21.550 "write": true, 00:14:21.550 "unmap": false, 00:14:21.550 "flush": false, 00:14:21.550 "reset": true, 00:14:21.550 "nvme_admin": false, 00:14:21.550 "nvme_io": false, 00:14:21.550 "nvme_io_md": false, 00:14:21.550 "write_zeroes": true, 00:14:21.550 "zcopy": false, 00:14:21.550 "get_zone_info": false, 00:14:21.550 "zone_management": false, 00:14:21.550 "zone_append": false, 00:14:21.550 "compare": false, 00:14:21.550 "compare_and_write": false, 00:14:21.550 "abort": false, 00:14:21.550 "seek_hole": false, 00:14:21.550 "seek_data": false, 00:14:21.550 "copy": false, 00:14:21.550 "nvme_iov_md": false 00:14:21.550 }, 00:14:21.550 "driver_specific": { 00:14:21.550 "raid": { 00:14:21.550 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:21.550 "strip_size_kb": 64, 00:14:21.550 "state": "online", 00:14:21.550 "raid_level": "raid5f", 00:14:21.550 "superblock": true, 00:14:21.550 "num_base_bdevs": 3, 00:14:21.550 "num_base_bdevs_discovered": 3, 00:14:21.550 "num_base_bdevs_operational": 3, 00:14:21.550 "base_bdevs_list": [ 00:14:21.550 { 00:14:21.550 "name": "BaseBdev1", 00:14:21.550 "uuid": "20ac9619-f596-44aa-8c40-6d9140b43cb4", 00:14:21.550 "is_configured": true, 00:14:21.550 "data_offset": 2048, 00:14:21.550 "data_size": 63488 00:14:21.550 }, 00:14:21.550 { 00:14:21.550 "name": "BaseBdev2", 00:14:21.550 "uuid": "255d6dc3-acee-4ef5-9ef6-74989fd850a5", 00:14:21.550 "is_configured": true, 00:14:21.550 "data_offset": 2048, 00:14:21.550 "data_size": 63488 00:14:21.550 }, 00:14:21.550 { 00:14:21.550 "name": "BaseBdev3", 00:14:21.550 "uuid": "776b96de-634c-4916-9d52-56b97b042788", 00:14:21.550 "is_configured": true, 00:14:21.550 "data_offset": 2048, 00:14:21.550 "data_size": 63488 00:14:21.550 } 00:14:21.550 ] 00:14:21.550 } 
00:14:21.550 } 00:14:21.550 }' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.550 BaseBdev2 00:14:21.550 BaseBdev3' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.550 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.808 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.809 [2024-12-08 20:09:53.637075] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.809 "name": "Existed_Raid", 00:14:21.809 "uuid": "2dc2d34d-2417-4a78-88cd-5a62605b03b8", 00:14:21.809 "strip_size_kb": 64, 00:14:21.809 "state": "online", 00:14:21.809 "raid_level": "raid5f", 00:14:21.809 "superblock": true, 00:14:21.809 "num_base_bdevs": 3, 00:14:21.809 "num_base_bdevs_discovered": 2, 00:14:21.809 "num_base_bdevs_operational": 2, 00:14:21.809 "base_bdevs_list": [ 00:14:21.809 { 00:14:21.809 "name": null, 00:14:21.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.809 "is_configured": false, 00:14:21.809 "data_offset": 0, 00:14:21.809 "data_size": 63488 00:14:21.809 }, 00:14:21.809 { 00:14:21.809 "name": "BaseBdev2", 00:14:21.809 "uuid": "255d6dc3-acee-4ef5-9ef6-74989fd850a5", 00:14:21.809 "is_configured": true, 00:14:21.809 "data_offset": 2048, 00:14:21.809 "data_size": 63488 00:14:21.809 }, 00:14:21.809 { 00:14:21.809 "name": "BaseBdev3", 00:14:21.809 "uuid": "776b96de-634c-4916-9d52-56b97b042788", 00:14:21.809 "is_configured": true, 00:14:21.809 "data_offset": 2048, 00:14:21.809 "data_size": 63488 00:14:21.809 } 00:14:21.809 ] 00:14:21.809 }' 00:14:21.809 20:09:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.068 20:09:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.326 20:09:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.326 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.326 [2024-12-08 20:09:54.186009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.326 [2024-12-08 20:09:54.186248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.327 [2024-12-08 20:09:54.277176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.327 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.586 [2024-12-08 20:09:54.321111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.586 [2024-12-08 20:09:54.321155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.586 BaseBdev2 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.586 [ 00:14:22.586 { 00:14:22.586 "name": "BaseBdev2", 00:14:22.586 "aliases": [ 00:14:22.586 "ecd92aeb-90fe-4311-beec-1defb1e71e34" 00:14:22.586 ], 00:14:22.586 "product_name": "Malloc disk", 00:14:22.586 "block_size": 512, 00:14:22.586 "num_blocks": 65536, 00:14:22.586 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:22.586 "assigned_rate_limits": { 00:14:22.586 "rw_ios_per_sec": 0, 00:14:22.586 "rw_mbytes_per_sec": 0, 00:14:22.586 "r_mbytes_per_sec": 0, 00:14:22.586 "w_mbytes_per_sec": 0 00:14:22.586 }, 00:14:22.586 "claimed": false, 00:14:22.586 "zoned": false, 00:14:22.586 "supported_io_types": { 00:14:22.586 "read": true, 00:14:22.586 "write": true, 00:14:22.586 "unmap": true, 00:14:22.586 "flush": true, 00:14:22.586 "reset": true, 00:14:22.586 "nvme_admin": false, 00:14:22.586 "nvme_io": false, 00:14:22.586 "nvme_io_md": false, 00:14:22.586 "write_zeroes": true, 00:14:22.586 "zcopy": true, 00:14:22.586 "get_zone_info": false, 00:14:22.586 "zone_management": false, 00:14:22.586 "zone_append": false, 
00:14:22.586 "compare": false, 00:14:22.586 "compare_and_write": false, 00:14:22.586 "abort": true, 00:14:22.586 "seek_hole": false, 00:14:22.586 "seek_data": false, 00:14:22.586 "copy": true, 00:14:22.586 "nvme_iov_md": false 00:14:22.586 }, 00:14:22.586 "memory_domains": [ 00:14:22.586 { 00:14:22.586 "dma_device_id": "system", 00:14:22.586 "dma_device_type": 1 00:14:22.586 }, 00:14:22.586 { 00:14:22.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.586 "dma_device_type": 2 00:14:22.586 } 00:14:22.586 ], 00:14:22.586 "driver_specific": {} 00:14:22.586 } 00:14:22.586 ] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.586 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.846 BaseBdev3 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.846 
20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.846 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.846 [ 00:14:22.846 { 00:14:22.846 "name": "BaseBdev3", 00:14:22.846 "aliases": [ 00:14:22.846 "5720edba-2b27-4651-ba45-cbe33907d113" 00:14:22.846 ], 00:14:22.846 "product_name": "Malloc disk", 00:14:22.846 "block_size": 512, 00:14:22.846 "num_blocks": 65536, 00:14:22.846 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:22.846 "assigned_rate_limits": { 00:14:22.846 "rw_ios_per_sec": 0, 00:14:22.846 "rw_mbytes_per_sec": 0, 00:14:22.846 "r_mbytes_per_sec": 0, 00:14:22.846 "w_mbytes_per_sec": 0 00:14:22.846 }, 00:14:22.846 "claimed": false, 00:14:22.846 "zoned": false, 00:14:22.846 "supported_io_types": { 00:14:22.846 "read": true, 00:14:22.846 "write": true, 00:14:22.846 "unmap": true, 00:14:22.846 "flush": true, 00:14:22.846 "reset": true, 00:14:22.846 "nvme_admin": false, 00:14:22.846 "nvme_io": false, 00:14:22.846 "nvme_io_md": false, 00:14:22.846 "write_zeroes": true, 00:14:22.846 "zcopy": true, 00:14:22.846 "get_zone_info": 
false, 00:14:22.846 "zone_management": false, 00:14:22.846 "zone_append": false, 00:14:22.846 "compare": false, 00:14:22.846 "compare_and_write": false, 00:14:22.846 "abort": true, 00:14:22.846 "seek_hole": false, 00:14:22.846 "seek_data": false, 00:14:22.846 "copy": true, 00:14:22.846 "nvme_iov_md": false 00:14:22.846 }, 00:14:22.846 "memory_domains": [ 00:14:22.846 { 00:14:22.846 "dma_device_id": "system", 00:14:22.846 "dma_device_type": 1 00:14:22.846 }, 00:14:22.846 { 00:14:22.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.847 "dma_device_type": 2 00:14:22.847 } 00:14:22.847 ], 00:14:22.847 "driver_specific": {} 00:14:22.847 } 00:14:22.847 ] 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.847 [2024-12-08 20:09:54.624815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.847 [2024-12-08 20:09:54.624859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.847 [2024-12-08 20:09:54.624878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.847 [2024-12-08 20:09:54.626573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.847 20:09:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.847 "name": "Existed_Raid", 00:14:22.847 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:22.847 "strip_size_kb": 64, 00:14:22.847 "state": "configuring", 00:14:22.847 "raid_level": "raid5f", 00:14:22.847 "superblock": true, 00:14:22.847 "num_base_bdevs": 3, 00:14:22.847 "num_base_bdevs_discovered": 2, 00:14:22.847 "num_base_bdevs_operational": 3, 00:14:22.847 "base_bdevs_list": [ 00:14:22.847 { 00:14:22.847 "name": "BaseBdev1", 00:14:22.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.847 "is_configured": false, 00:14:22.847 "data_offset": 0, 00:14:22.847 "data_size": 0 00:14:22.847 }, 00:14:22.847 { 00:14:22.847 "name": "BaseBdev2", 00:14:22.847 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:22.847 "is_configured": true, 00:14:22.847 "data_offset": 2048, 00:14:22.847 "data_size": 63488 00:14:22.847 }, 00:14:22.847 { 00:14:22.847 "name": "BaseBdev3", 00:14:22.847 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:22.847 "is_configured": true, 00:14:22.847 "data_offset": 2048, 00:14:22.847 "data_size": 63488 00:14:22.847 } 00:14:22.847 ] 00:14:22.847 }' 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.847 20:09:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.107 [2024-12-08 20:09:55.044086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.107 
20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.107 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.365 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.365 "name": "Existed_Raid", 00:14:23.365 "uuid": 
"d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:23.365 "strip_size_kb": 64, 00:14:23.365 "state": "configuring", 00:14:23.365 "raid_level": "raid5f", 00:14:23.365 "superblock": true, 00:14:23.365 "num_base_bdevs": 3, 00:14:23.365 "num_base_bdevs_discovered": 1, 00:14:23.365 "num_base_bdevs_operational": 3, 00:14:23.365 "base_bdevs_list": [ 00:14:23.365 { 00:14:23.365 "name": "BaseBdev1", 00:14:23.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.365 "is_configured": false, 00:14:23.365 "data_offset": 0, 00:14:23.365 "data_size": 0 00:14:23.365 }, 00:14:23.365 { 00:14:23.365 "name": null, 00:14:23.365 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:23.365 "is_configured": false, 00:14:23.365 "data_offset": 0, 00:14:23.365 "data_size": 63488 00:14:23.365 }, 00:14:23.365 { 00:14:23.365 "name": "BaseBdev3", 00:14:23.365 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:23.365 "is_configured": true, 00:14:23.365 "data_offset": 2048, 00:14:23.365 "data_size": 63488 00:14:23.365 } 00:14:23.365 ] 00:14:23.365 }' 00:14:23.365 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.365 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.624 20:09:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 [2024-12-08 20:09:55.515938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.624 BaseBdev1 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 [ 00:14:23.624 { 00:14:23.624 "name": "BaseBdev1", 00:14:23.624 "aliases": [ 00:14:23.624 "883ef2ca-c75c-409b-adc5-6f2efb8c060c" 00:14:23.624 ], 00:14:23.624 "product_name": "Malloc disk", 00:14:23.624 "block_size": 512, 00:14:23.624 "num_blocks": 65536, 00:14:23.624 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:23.624 "assigned_rate_limits": { 00:14:23.624 "rw_ios_per_sec": 0, 00:14:23.624 "rw_mbytes_per_sec": 0, 00:14:23.624 "r_mbytes_per_sec": 0, 00:14:23.624 "w_mbytes_per_sec": 0 00:14:23.624 }, 00:14:23.624 "claimed": true, 00:14:23.624 "claim_type": "exclusive_write", 00:14:23.624 "zoned": false, 00:14:23.624 "supported_io_types": { 00:14:23.624 "read": true, 00:14:23.624 "write": true, 00:14:23.624 "unmap": true, 00:14:23.624 "flush": true, 00:14:23.624 "reset": true, 00:14:23.624 "nvme_admin": false, 00:14:23.624 "nvme_io": false, 00:14:23.624 "nvme_io_md": false, 00:14:23.624 "write_zeroes": true, 00:14:23.624 "zcopy": true, 00:14:23.624 "get_zone_info": false, 00:14:23.624 "zone_management": false, 00:14:23.624 "zone_append": false, 00:14:23.624 "compare": false, 00:14:23.624 "compare_and_write": false, 00:14:23.624 "abort": true, 00:14:23.624 "seek_hole": false, 00:14:23.624 "seek_data": false, 00:14:23.624 "copy": true, 00:14:23.624 "nvme_iov_md": false 00:14:23.624 }, 00:14:23.624 "memory_domains": [ 00:14:23.624 { 00:14:23.624 "dma_device_id": "system", 00:14:23.624 "dma_device_type": 1 00:14:23.624 }, 00:14:23.624 { 00:14:23.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.624 "dma_device_type": 2 00:14:23.624 } 00:14:23.624 ], 00:14:23.624 "driver_specific": {} 00:14:23.624 } 00:14:23.624 ] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.624 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.882 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.882 "name": "Existed_Raid", 00:14:23.882 "uuid": 
"d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:23.882 "strip_size_kb": 64, 00:14:23.882 "state": "configuring", 00:14:23.882 "raid_level": "raid5f", 00:14:23.882 "superblock": true, 00:14:23.882 "num_base_bdevs": 3, 00:14:23.882 "num_base_bdevs_discovered": 2, 00:14:23.882 "num_base_bdevs_operational": 3, 00:14:23.882 "base_bdevs_list": [ 00:14:23.882 { 00:14:23.882 "name": "BaseBdev1", 00:14:23.882 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:23.882 "is_configured": true, 00:14:23.882 "data_offset": 2048, 00:14:23.882 "data_size": 63488 00:14:23.882 }, 00:14:23.882 { 00:14:23.882 "name": null, 00:14:23.882 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:23.882 "is_configured": false, 00:14:23.882 "data_offset": 0, 00:14:23.882 "data_size": 63488 00:14:23.882 }, 00:14:23.882 { 00:14:23.882 "name": "BaseBdev3", 00:14:23.882 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:23.882 "is_configured": true, 00:14:23.882 "data_offset": 2048, 00:14:23.882 "data_size": 63488 00:14:23.882 } 00:14:23.882 ] 00:14:23.882 }' 00:14:23.882 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.882 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.140 20:09:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.140 20:09:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 [2024-12-08 20:09:56.003128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.140 "name": "Existed_Raid", 00:14:24.140 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:24.140 "strip_size_kb": 64, 00:14:24.140 "state": "configuring", 00:14:24.140 "raid_level": "raid5f", 00:14:24.140 "superblock": true, 00:14:24.140 "num_base_bdevs": 3, 00:14:24.140 "num_base_bdevs_discovered": 1, 00:14:24.140 "num_base_bdevs_operational": 3, 00:14:24.140 "base_bdevs_list": [ 00:14:24.140 { 00:14:24.140 "name": "BaseBdev1", 00:14:24.140 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:24.140 "is_configured": true, 00:14:24.140 "data_offset": 2048, 00:14:24.140 "data_size": 63488 00:14:24.140 }, 00:14:24.140 { 00:14:24.140 "name": null, 00:14:24.140 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:24.140 "is_configured": false, 00:14:24.140 "data_offset": 0, 00:14:24.140 "data_size": 63488 00:14:24.140 }, 00:14:24.140 { 00:14:24.140 "name": null, 00:14:24.140 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:24.140 "is_configured": false, 00:14:24.140 "data_offset": 0, 00:14:24.140 "data_size": 63488 00:14:24.140 } 00:14:24.140 ] 00:14:24.140 }' 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.140 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 [2024-12-08 20:09:56.446413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.706 20:09:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.706 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.707 "name": "Existed_Raid", 00:14:24.707 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:24.707 "strip_size_kb": 64, 00:14:24.707 "state": "configuring", 00:14:24.707 "raid_level": "raid5f", 00:14:24.707 "superblock": true, 00:14:24.707 "num_base_bdevs": 3, 00:14:24.707 "num_base_bdevs_discovered": 2, 00:14:24.707 "num_base_bdevs_operational": 3, 00:14:24.707 "base_bdevs_list": [ 00:14:24.707 { 00:14:24.707 "name": "BaseBdev1", 00:14:24.707 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:24.707 "is_configured": true, 00:14:24.707 "data_offset": 2048, 00:14:24.707 "data_size": 63488 00:14:24.707 }, 00:14:24.707 { 00:14:24.707 "name": null, 00:14:24.707 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:24.707 "is_configured": false, 00:14:24.707 "data_offset": 0, 00:14:24.707 "data_size": 63488 00:14:24.707 }, 00:14:24.707 { 00:14:24.707 "name": "BaseBdev3", 00:14:24.707 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:24.707 
"is_configured": true, 00:14:24.707 "data_offset": 2048, 00:14:24.707 "data_size": 63488 00:14:24.707 } 00:14:24.707 ] 00:14:24.707 }' 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.707 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.965 20:09:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.965 [2024-12-08 20:09:56.925599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.224 "name": "Existed_Raid", 00:14:25.224 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:25.224 "strip_size_kb": 64, 00:14:25.224 "state": "configuring", 00:14:25.224 "raid_level": "raid5f", 00:14:25.224 "superblock": true, 00:14:25.224 "num_base_bdevs": 3, 00:14:25.224 "num_base_bdevs_discovered": 1, 00:14:25.224 "num_base_bdevs_operational": 3, 00:14:25.224 "base_bdevs_list": [ 00:14:25.224 { 00:14:25.224 "name": null, 00:14:25.224 
"uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:25.224 "is_configured": false, 00:14:25.224 "data_offset": 0, 00:14:25.224 "data_size": 63488 00:14:25.224 }, 00:14:25.224 { 00:14:25.224 "name": null, 00:14:25.224 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:25.224 "is_configured": false, 00:14:25.224 "data_offset": 0, 00:14:25.224 "data_size": 63488 00:14:25.224 }, 00:14:25.224 { 00:14:25.224 "name": "BaseBdev3", 00:14:25.224 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:25.224 "is_configured": true, 00:14:25.224 "data_offset": 2048, 00:14:25.224 "data_size": 63488 00:14:25.224 } 00:14:25.224 ] 00:14:25.224 }' 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.224 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.483 [2024-12-08 20:09:57.451889] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.483 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.741 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.742 "name": "Existed_Raid", 00:14:25.742 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:25.742 "strip_size_kb": 64, 00:14:25.742 "state": "configuring", 00:14:25.742 "raid_level": "raid5f", 00:14:25.742 "superblock": true, 00:14:25.742 "num_base_bdevs": 3, 00:14:25.742 "num_base_bdevs_discovered": 2, 00:14:25.742 "num_base_bdevs_operational": 3, 00:14:25.742 "base_bdevs_list": [ 00:14:25.742 { 00:14:25.742 "name": null, 00:14:25.742 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:25.742 "is_configured": false, 00:14:25.742 "data_offset": 0, 00:14:25.742 "data_size": 63488 00:14:25.742 }, 00:14:25.742 { 00:14:25.742 "name": "BaseBdev2", 00:14:25.742 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:25.742 "is_configured": true, 00:14:25.742 "data_offset": 2048, 00:14:25.742 "data_size": 63488 00:14:25.742 }, 00:14:25.742 { 00:14:25.742 "name": "BaseBdev3", 00:14:25.742 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:25.742 "is_configured": true, 00:14:25.742 "data_offset": 2048, 00:14:25.742 "data_size": 63488 00:14:25.742 } 00:14:25.742 ] 00:14:25.742 }' 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.742 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.001 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.260 20:09:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 883ef2ca-c75c-409b-adc5-6f2efb8c060c 00:14:26.260 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.260 20:09:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.260 [2024-12-08 20:09:58.017785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.260 [2024-12-08 20:09:58.018107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.260 [2024-12-08 20:09:58.018159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:26.260 [2024-12-08 20:09:58.018438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.260 NewBaseBdev 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.260 [2024-12-08 20:09:58.023768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.260 [2024-12-08 20:09:58.023822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.260 [2024-12-08 20:09:58.024099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.260 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.260 [ 00:14:26.260 { 00:14:26.260 "name": "NewBaseBdev", 00:14:26.260 "aliases": [ 00:14:26.260 "883ef2ca-c75c-409b-adc5-6f2efb8c060c" 00:14:26.260 ], 00:14:26.260 "product_name": "Malloc disk", 00:14:26.260 "block_size": 512, 
00:14:26.260 "num_blocks": 65536, 00:14:26.260 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:26.260 "assigned_rate_limits": { 00:14:26.260 "rw_ios_per_sec": 0, 00:14:26.260 "rw_mbytes_per_sec": 0, 00:14:26.261 "r_mbytes_per_sec": 0, 00:14:26.261 "w_mbytes_per_sec": 0 00:14:26.261 }, 00:14:26.261 "claimed": true, 00:14:26.261 "claim_type": "exclusive_write", 00:14:26.261 "zoned": false, 00:14:26.261 "supported_io_types": { 00:14:26.261 "read": true, 00:14:26.261 "write": true, 00:14:26.261 "unmap": true, 00:14:26.261 "flush": true, 00:14:26.261 "reset": true, 00:14:26.261 "nvme_admin": false, 00:14:26.261 "nvme_io": false, 00:14:26.261 "nvme_io_md": false, 00:14:26.261 "write_zeroes": true, 00:14:26.261 "zcopy": true, 00:14:26.261 "get_zone_info": false, 00:14:26.261 "zone_management": false, 00:14:26.261 "zone_append": false, 00:14:26.261 "compare": false, 00:14:26.261 "compare_and_write": false, 00:14:26.261 "abort": true, 00:14:26.261 "seek_hole": false, 00:14:26.261 "seek_data": false, 00:14:26.261 "copy": true, 00:14:26.261 "nvme_iov_md": false 00:14:26.261 }, 00:14:26.261 "memory_domains": [ 00:14:26.261 { 00:14:26.261 "dma_device_id": "system", 00:14:26.261 "dma_device_type": 1 00:14:26.261 }, 00:14:26.261 { 00:14:26.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.261 "dma_device_type": 2 00:14:26.261 } 00:14:26.261 ], 00:14:26.261 "driver_specific": {} 00:14:26.261 } 00:14:26.261 ] 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.261 "name": "Existed_Raid", 00:14:26.261 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:26.261 "strip_size_kb": 64, 00:14:26.261 "state": "online", 00:14:26.261 "raid_level": "raid5f", 00:14:26.261 "superblock": true, 00:14:26.261 "num_base_bdevs": 3, 00:14:26.261 "num_base_bdevs_discovered": 3, 00:14:26.261 "num_base_bdevs_operational": 3, 00:14:26.261 "base_bdevs_list": [ 00:14:26.261 { 00:14:26.261 "name": 
"NewBaseBdev", 00:14:26.261 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:26.261 "is_configured": true, 00:14:26.261 "data_offset": 2048, 00:14:26.261 "data_size": 63488 00:14:26.261 }, 00:14:26.261 { 00:14:26.261 "name": "BaseBdev2", 00:14:26.261 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:26.261 "is_configured": true, 00:14:26.261 "data_offset": 2048, 00:14:26.261 "data_size": 63488 00:14:26.261 }, 00:14:26.261 { 00:14:26.261 "name": "BaseBdev3", 00:14:26.261 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:26.261 "is_configured": true, 00:14:26.261 "data_offset": 2048, 00:14:26.261 "data_size": 63488 00:14:26.261 } 00:14:26.261 ] 00:14:26.261 }' 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.261 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.521 20:09:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.521 [2024-12-08 20:09:58.453670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.521 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.521 "name": "Existed_Raid", 00:14:26.521 "aliases": [ 00:14:26.521 "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1" 00:14:26.521 ], 00:14:26.521 "product_name": "Raid Volume", 00:14:26.521 "block_size": 512, 00:14:26.521 "num_blocks": 126976, 00:14:26.521 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:26.521 "assigned_rate_limits": { 00:14:26.521 "rw_ios_per_sec": 0, 00:14:26.521 "rw_mbytes_per_sec": 0, 00:14:26.521 "r_mbytes_per_sec": 0, 00:14:26.521 "w_mbytes_per_sec": 0 00:14:26.521 }, 00:14:26.521 "claimed": false, 00:14:26.521 "zoned": false, 00:14:26.521 "supported_io_types": { 00:14:26.521 "read": true, 00:14:26.521 "write": true, 00:14:26.521 "unmap": false, 00:14:26.521 "flush": false, 00:14:26.521 "reset": true, 00:14:26.521 "nvme_admin": false, 00:14:26.521 "nvme_io": false, 00:14:26.521 "nvme_io_md": false, 00:14:26.521 "write_zeroes": true, 00:14:26.521 "zcopy": false, 00:14:26.521 "get_zone_info": false, 00:14:26.521 "zone_management": false, 00:14:26.521 "zone_append": false, 00:14:26.522 "compare": false, 00:14:26.522 "compare_and_write": false, 00:14:26.522 "abort": false, 00:14:26.522 "seek_hole": false, 00:14:26.522 "seek_data": false, 00:14:26.522 "copy": false, 00:14:26.522 "nvme_iov_md": false 00:14:26.522 }, 00:14:26.522 "driver_specific": { 00:14:26.522 "raid": { 00:14:26.522 "uuid": "d6b9ebd0-88ca-4110-842a-2aa4a57e4ed1", 00:14:26.522 "strip_size_kb": 64, 00:14:26.522 "state": "online", 00:14:26.522 "raid_level": "raid5f", 00:14:26.522 "superblock": true, 00:14:26.522 "num_base_bdevs": 3, 00:14:26.522 
"num_base_bdevs_discovered": 3, 00:14:26.522 "num_base_bdevs_operational": 3, 00:14:26.522 "base_bdevs_list": [ 00:14:26.522 { 00:14:26.522 "name": "NewBaseBdev", 00:14:26.522 "uuid": "883ef2ca-c75c-409b-adc5-6f2efb8c060c", 00:14:26.522 "is_configured": true, 00:14:26.522 "data_offset": 2048, 00:14:26.522 "data_size": 63488 00:14:26.522 }, 00:14:26.522 { 00:14:26.522 "name": "BaseBdev2", 00:14:26.522 "uuid": "ecd92aeb-90fe-4311-beec-1defb1e71e34", 00:14:26.522 "is_configured": true, 00:14:26.522 "data_offset": 2048, 00:14:26.522 "data_size": 63488 00:14:26.522 }, 00:14:26.522 { 00:14:26.522 "name": "BaseBdev3", 00:14:26.522 "uuid": "5720edba-2b27-4651-ba45-cbe33907d113", 00:14:26.522 "is_configured": true, 00:14:26.522 "data_offset": 2048, 00:14:26.522 "data_size": 63488 00:14:26.522 } 00:14:26.522 ] 00:14:26.522 } 00:14:26.522 } 00:14:26.522 }' 00:14:26.522 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.782 BaseBdev2 00:14:26.782 BaseBdev3' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.782 [2024-12-08 20:09:58.685093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.782 [2024-12-08 20:09:58.685115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.782 [2024-12-08 20:09:58.685178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.782 [2024-12-08 20:09:58.685446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.782 [2024-12-08 20:09:58.685458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80212 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80212 ']' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80212 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80212 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80212' 00:14:26.782 killing process with pid 80212 00:14:26.782 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80212 00:14:26.783 [2024-12-08 20:09:58.721857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.783 20:09:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80212 00:14:27.043 [2024-12-08 20:09:59.014404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.425 ************************************ 00:14:28.425 END TEST raid5f_state_function_test_sb 00:14:28.425 ************************************ 00:14:28.425 20:10:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.425 00:14:28.425 real 0m10.104s 00:14:28.425 user 0m16.025s 00:14:28.425 sys 0m1.715s 00:14:28.425 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.425 20:10:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.425 20:10:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:28.425 20:10:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:28.425 20:10:00 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.425 20:10:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.425 ************************************ 00:14:28.425 START TEST raid5f_superblock_test 00:14:28.425 ************************************ 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80828 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80828 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80828 ']' 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.425 20:10:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.425 [2024-12-08 20:10:00.247465] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:28.425 [2024-12-08 20:10:00.247671] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80828 ] 00:14:28.686 [2024-12-08 20:10:00.421128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.686 [2024-12-08 20:10:00.523044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.947 [2024-12-08 20:10:00.706896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.947 [2024-12-08 20:10:00.707058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.206 malloc1 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.206 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.206 [2024-12-08 20:10:01.104713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.207 [2024-12-08 20:10:01.104833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.207 [2024-12-08 20:10:01.104872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.207 [2024-12-08 20:10:01.104912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.207 [2024-12-08 20:10:01.106984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.207 [2024-12-08 20:10:01.107059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.207 pt1 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.207 malloc2 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.207 [2024-12-08 20:10:01.161431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.207 [2024-12-08 20:10:01.161517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.207 [2024-12-08 20:10:01.161558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.207 [2024-12-08 20:10:01.161586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.207 [2024-12-08 20:10:01.163599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.207 [2024-12-08 20:10:01.163664] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.207 pt2 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.207 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.465 malloc3 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.465 [2024-12-08 20:10:01.252785] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:29.465 [2024-12-08 20:10:01.252834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.465 [2024-12-08 20:10:01.252853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.465 [2024-12-08 20:10:01.252861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.465 [2024-12-08 20:10:01.254841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.465 [2024-12-08 20:10:01.254876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.465 pt3 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.465 [2024-12-08 20:10:01.264812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.465 [2024-12-08 20:10:01.266632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.465 [2024-12-08 20:10:01.266736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.465 [2024-12-08 20:10:01.266958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.465 [2024-12-08 20:10:01.267019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:29.465 [2024-12-08 20:10:01.267281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.465 [2024-12-08 20:10:01.272898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.465 [2024-12-08 20:10:01.272964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.465 [2024-12-08 20:10:01.273224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.465 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.465 "name": "raid_bdev1", 00:14:29.465 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:29.465 "strip_size_kb": 64, 00:14:29.465 "state": "online", 00:14:29.465 "raid_level": "raid5f", 00:14:29.465 "superblock": true, 00:14:29.465 "num_base_bdevs": 3, 00:14:29.465 "num_base_bdevs_discovered": 3, 00:14:29.465 "num_base_bdevs_operational": 3, 00:14:29.465 "base_bdevs_list": [ 00:14:29.465 { 00:14:29.465 "name": "pt1", 00:14:29.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.465 "is_configured": true, 00:14:29.465 "data_offset": 2048, 00:14:29.465 "data_size": 63488 00:14:29.465 }, 00:14:29.465 { 00:14:29.465 "name": "pt2", 00:14:29.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.465 "is_configured": true, 00:14:29.465 "data_offset": 2048, 00:14:29.466 "data_size": 63488 00:14:29.466 }, 00:14:29.466 { 00:14:29.466 "name": "pt3", 00:14:29.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.466 "is_configured": true, 00:14:29.466 "data_offset": 2048, 00:14:29.466 "data_size": 63488 00:14:29.466 } 00:14:29.466 ] 00:14:29.466 }' 00:14:29.466 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.466 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.723 20:10:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.723 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.982 [2024-12-08 20:10:01.703200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.982 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.982 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.982 "name": "raid_bdev1", 00:14:29.982 "aliases": [ 00:14:29.982 "12205ec0-7ce4-4c8a-b922-86aeffad9a0c" 00:14:29.982 ], 00:14:29.982 "product_name": "Raid Volume", 00:14:29.982 "block_size": 512, 00:14:29.982 "num_blocks": 126976, 00:14:29.982 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:29.982 "assigned_rate_limits": { 00:14:29.982 "rw_ios_per_sec": 0, 00:14:29.982 "rw_mbytes_per_sec": 0, 00:14:29.982 "r_mbytes_per_sec": 0, 00:14:29.982 "w_mbytes_per_sec": 0 00:14:29.982 }, 00:14:29.982 "claimed": false, 00:14:29.982 "zoned": false, 00:14:29.982 "supported_io_types": { 00:14:29.982 "read": true, 00:14:29.982 "write": true, 00:14:29.982 "unmap": false, 00:14:29.982 "flush": false, 00:14:29.982 "reset": true, 00:14:29.982 "nvme_admin": false, 00:14:29.982 "nvme_io": false, 00:14:29.982 "nvme_io_md": false, 
00:14:29.982 "write_zeroes": true, 00:14:29.982 "zcopy": false, 00:14:29.982 "get_zone_info": false, 00:14:29.982 "zone_management": false, 00:14:29.982 "zone_append": false, 00:14:29.982 "compare": false, 00:14:29.982 "compare_and_write": false, 00:14:29.982 "abort": false, 00:14:29.982 "seek_hole": false, 00:14:29.982 "seek_data": false, 00:14:29.982 "copy": false, 00:14:29.982 "nvme_iov_md": false 00:14:29.982 }, 00:14:29.982 "driver_specific": { 00:14:29.982 "raid": { 00:14:29.982 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:29.982 "strip_size_kb": 64, 00:14:29.982 "state": "online", 00:14:29.982 "raid_level": "raid5f", 00:14:29.983 "superblock": true, 00:14:29.983 "num_base_bdevs": 3, 00:14:29.983 "num_base_bdevs_discovered": 3, 00:14:29.983 "num_base_bdevs_operational": 3, 00:14:29.983 "base_bdevs_list": [ 00:14:29.983 { 00:14:29.983 "name": "pt1", 00:14:29.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.983 "is_configured": true, 00:14:29.983 "data_offset": 2048, 00:14:29.983 "data_size": 63488 00:14:29.983 }, 00:14:29.983 { 00:14:29.983 "name": "pt2", 00:14:29.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.983 "is_configured": true, 00:14:29.983 "data_offset": 2048, 00:14:29.983 "data_size": 63488 00:14:29.983 }, 00:14:29.983 { 00:14:29.983 "name": "pt3", 00:14:29.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.983 "is_configured": true, 00:14:29.983 "data_offset": 2048, 00:14:29.983 "data_size": 63488 00:14:29.983 } 00:14:29.983 ] 00:14:29.983 } 00:14:29.983 } 00:14:29.983 }' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.983 pt2 00:14:29.983 pt3' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.983 
20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.983 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.242 [2024-12-08 20:10:01.974686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.242 20:10:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=12205ec0-7ce4-4c8a-b922-86aeffad9a0c 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 12205ec0-7ce4-4c8a-b922-86aeffad9a0c ']' 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.242 20:10:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.242 [2024-12-08 20:10:02.006463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.242 [2024-12-08 20:10:02.006487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.242 [2024-12-08 20:10:02.006551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.242 [2024-12-08 20:10:02.006622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.242 [2024-12-08 20:10:02.006631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.242 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 [2024-12-08 20:10:02.138294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:30.243 [2024-12-08 20:10:02.140126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:30.243 [2024-12-08 20:10:02.140179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:30.243 [2024-12-08 20:10:02.140228] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:30.243 [2024-12-08 20:10:02.140275] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:30.243 [2024-12-08 20:10:02.140293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:30.243 [2024-12-08 20:10:02.140309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.243 [2024-12-08 20:10:02.140317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:30.243 request: 00:14:30.243 { 00:14:30.243 "name": "raid_bdev1", 00:14:30.243 "raid_level": "raid5f", 00:14:30.243 "base_bdevs": [ 00:14:30.243 "malloc1", 00:14:30.243 "malloc2", 00:14:30.243 "malloc3" 00:14:30.243 ], 00:14:30.243 "strip_size_kb": 64, 00:14:30.243 "superblock": false, 00:14:30.243 "method": "bdev_raid_create", 00:14:30.243 "req_id": 1 00:14:30.243 } 00:14:30.243 Got JSON-RPC error response 00:14:30.243 response: 00:14:30.243 { 00:14:30.243 "code": -17, 00:14:30.243 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:30.243 } 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 
20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.243 [2024-12-08 20:10:02.206124] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:30.243 [2024-12-08 20:10:02.206202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.243 [2024-12-08 20:10:02.206236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.243 [2024-12-08 20:10:02.206262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.243 [2024-12-08 20:10:02.208336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.243 [2024-12-08 20:10:02.208403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:30.243 [2024-12-08 20:10:02.208495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:30.243 [2024-12-08 20:10:02.208575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:30.243 pt1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.243 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.503 "name": "raid_bdev1", 00:14:30.503 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:30.503 "strip_size_kb": 64, 00:14:30.503 "state": "configuring", 00:14:30.503 "raid_level": "raid5f", 00:14:30.503 "superblock": true, 00:14:30.503 "num_base_bdevs": 3, 00:14:30.503 "num_base_bdevs_discovered": 1, 00:14:30.503 
"num_base_bdevs_operational": 3, 00:14:30.503 "base_bdevs_list": [ 00:14:30.503 { 00:14:30.503 "name": "pt1", 00:14:30.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.503 "is_configured": true, 00:14:30.503 "data_offset": 2048, 00:14:30.503 "data_size": 63488 00:14:30.503 }, 00:14:30.503 { 00:14:30.503 "name": null, 00:14:30.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.503 "is_configured": false, 00:14:30.503 "data_offset": 2048, 00:14:30.503 "data_size": 63488 00:14:30.503 }, 00:14:30.503 { 00:14:30.503 "name": null, 00:14:30.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.503 "is_configured": false, 00:14:30.503 "data_offset": 2048, 00:14:30.503 "data_size": 63488 00:14:30.503 } 00:14:30.503 ] 00:14:30.503 }' 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.503 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.762 [2024-12-08 20:10:02.597479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.762 [2024-12-08 20:10:02.597587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.762 [2024-12-08 20:10:02.597628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:30.762 [2024-12-08 20:10:02.597656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.762 [2024-12-08 20:10:02.598149] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.762 [2024-12-08 20:10:02.598178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.762 [2024-12-08 20:10:02.598262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.762 [2024-12-08 20:10:02.598289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.762 pt2 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.762 [2024-12-08 20:10:02.609462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.762 "name": "raid_bdev1", 00:14:30.762 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:30.762 "strip_size_kb": 64, 00:14:30.762 "state": "configuring", 00:14:30.762 "raid_level": "raid5f", 00:14:30.762 "superblock": true, 00:14:30.762 "num_base_bdevs": 3, 00:14:30.762 "num_base_bdevs_discovered": 1, 00:14:30.762 "num_base_bdevs_operational": 3, 00:14:30.762 "base_bdevs_list": [ 00:14:30.762 { 00:14:30.762 "name": "pt1", 00:14:30.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.762 "is_configured": true, 00:14:30.762 "data_offset": 2048, 00:14:30.762 "data_size": 63488 00:14:30.762 }, 00:14:30.762 { 00:14:30.762 "name": null, 00:14:30.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.762 "is_configured": false, 00:14:30.762 "data_offset": 0, 00:14:30.762 "data_size": 63488 00:14:30.762 }, 00:14:30.762 { 00:14:30.762 "name": null, 00:14:30.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.762 "is_configured": false, 00:14:30.762 "data_offset": 2048, 00:14:30.762 "data_size": 63488 00:14:30.762 } 00:14:30.762 ] 00:14:30.762 }' 00:14:30.762 20:10:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.762 20:10:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.331 [2024-12-08 20:10:03.020739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.331 [2024-12-08 20:10:03.020833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.331 [2024-12-08 20:10:03.020866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:31.331 [2024-12-08 20:10:03.020895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.331 [2024-12-08 20:10:03.021386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.331 [2024-12-08 20:10:03.021445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.331 [2024-12-08 20:10:03.021564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:31.331 [2024-12-08 20:10:03.021615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.331 pt2 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:31.331 20:10:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.331 [2024-12-08 20:10:03.032713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:31.331 [2024-12-08 20:10:03.032788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.331 [2024-12-08 20:10:03.032816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:31.331 [2024-12-08 20:10:03.032840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.331 [2024-12-08 20:10:03.033239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.331 [2024-12-08 20:10:03.033296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:31.331 [2024-12-08 20:10:03.033396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:31.331 [2024-12-08 20:10:03.033444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:31.331 [2024-12-08 20:10:03.033610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:31.331 [2024-12-08 20:10:03.033654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.331 [2024-12-08 20:10:03.033924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:31.331 [2024-12-08 20:10:03.039206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:31.331 [2024-12-08 20:10:03.039257] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:31.331 [2024-12-08 20:10:03.039482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.331 pt3 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.331 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.331 "name": "raid_bdev1", 00:14:31.331 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:31.331 "strip_size_kb": 64, 00:14:31.331 "state": "online", 00:14:31.331 "raid_level": "raid5f", 00:14:31.331 "superblock": true, 00:14:31.331 "num_base_bdevs": 3, 00:14:31.331 "num_base_bdevs_discovered": 3, 00:14:31.331 "num_base_bdevs_operational": 3, 00:14:31.331 "base_bdevs_list": [ 00:14:31.331 { 00:14:31.331 "name": "pt1", 00:14:31.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.331 "is_configured": true, 00:14:31.331 "data_offset": 2048, 00:14:31.331 "data_size": 63488 00:14:31.331 }, 00:14:31.331 { 00:14:31.332 "name": "pt2", 00:14:31.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.332 "is_configured": true, 00:14:31.332 "data_offset": 2048, 00:14:31.332 "data_size": 63488 00:14:31.332 }, 00:14:31.332 { 00:14:31.332 "name": "pt3", 00:14:31.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.332 "is_configured": true, 00:14:31.332 "data_offset": 2048, 00:14:31.332 "data_size": 63488 00:14:31.332 } 00:14:31.332 ] 00:14:31.332 }' 00:14:31.332 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.332 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.592 [2024-12-08 20:10:03.513230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.592 "name": "raid_bdev1", 00:14:31.592 "aliases": [ 00:14:31.592 "12205ec0-7ce4-4c8a-b922-86aeffad9a0c" 00:14:31.592 ], 00:14:31.592 "product_name": "Raid Volume", 00:14:31.592 "block_size": 512, 00:14:31.592 "num_blocks": 126976, 00:14:31.592 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:31.592 "assigned_rate_limits": { 00:14:31.592 "rw_ios_per_sec": 0, 00:14:31.592 "rw_mbytes_per_sec": 0, 00:14:31.592 "r_mbytes_per_sec": 0, 00:14:31.592 "w_mbytes_per_sec": 0 00:14:31.592 }, 00:14:31.592 "claimed": false, 00:14:31.592 "zoned": false, 00:14:31.592 "supported_io_types": { 00:14:31.592 "read": true, 00:14:31.592 "write": true, 00:14:31.592 "unmap": false, 00:14:31.592 "flush": false, 00:14:31.592 "reset": true, 00:14:31.592 "nvme_admin": false, 00:14:31.592 "nvme_io": false, 00:14:31.592 "nvme_io_md": false, 00:14:31.592 "write_zeroes": true, 00:14:31.592 "zcopy": false, 00:14:31.592 
"get_zone_info": false, 00:14:31.592 "zone_management": false, 00:14:31.592 "zone_append": false, 00:14:31.592 "compare": false, 00:14:31.592 "compare_and_write": false, 00:14:31.592 "abort": false, 00:14:31.592 "seek_hole": false, 00:14:31.592 "seek_data": false, 00:14:31.592 "copy": false, 00:14:31.592 "nvme_iov_md": false 00:14:31.592 }, 00:14:31.592 "driver_specific": { 00:14:31.592 "raid": { 00:14:31.592 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:31.592 "strip_size_kb": 64, 00:14:31.592 "state": "online", 00:14:31.592 "raid_level": "raid5f", 00:14:31.592 "superblock": true, 00:14:31.592 "num_base_bdevs": 3, 00:14:31.592 "num_base_bdevs_discovered": 3, 00:14:31.592 "num_base_bdevs_operational": 3, 00:14:31.592 "base_bdevs_list": [ 00:14:31.592 { 00:14:31.592 "name": "pt1", 00:14:31.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.592 "is_configured": true, 00:14:31.592 "data_offset": 2048, 00:14:31.592 "data_size": 63488 00:14:31.592 }, 00:14:31.592 { 00:14:31.592 "name": "pt2", 00:14:31.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.592 "is_configured": true, 00:14:31.592 "data_offset": 2048, 00:14:31.592 "data_size": 63488 00:14:31.592 }, 00:14:31.592 { 00:14:31.592 "name": "pt3", 00:14:31.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.592 "is_configured": true, 00:14:31.592 "data_offset": 2048, 00:14:31.592 "data_size": 63488 00:14:31.592 } 00:14:31.592 ] 00:14:31.592 } 00:14:31.592 } 00:14:31.592 }' 00:14:31.592 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:31.852 pt2 00:14:31.852 pt3' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.852 20:10:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [2024-12-08 20:10:03.760704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 12205ec0-7ce4-4c8a-b922-86aeffad9a0c '!=' 12205ec0-7ce4-4c8a-b922-86aeffad9a0c ']' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.852 [2024-12-08 20:10:03.804495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.852 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.111 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.111 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.111 "name": "raid_bdev1", 00:14:32.111 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:32.111 "strip_size_kb": 64, 00:14:32.111 "state": "online", 00:14:32.111 "raid_level": "raid5f", 00:14:32.111 "superblock": true, 00:14:32.111 "num_base_bdevs": 3, 00:14:32.111 "num_base_bdevs_discovered": 2, 00:14:32.111 "num_base_bdevs_operational": 2, 00:14:32.112 "base_bdevs_list": [ 00:14:32.112 { 00:14:32.112 "name": null, 00:14:32.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.112 "is_configured": false, 00:14:32.112 "data_offset": 0, 00:14:32.112 "data_size": 63488 00:14:32.112 }, 00:14:32.112 { 00:14:32.112 "name": "pt2", 00:14:32.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.112 "is_configured": true, 00:14:32.112 "data_offset": 2048, 00:14:32.112 "data_size": 63488 00:14:32.112 }, 00:14:32.112 { 00:14:32.112 "name": "pt3", 00:14:32.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.112 "is_configured": true, 00:14:32.112 "data_offset": 2048, 00:14:32.112 "data_size": 63488 00:14:32.112 } 00:14:32.112 ] 00:14:32.112 }' 00:14:32.112 20:10:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.112 20:10:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.371 [2024-12-08 20:10:04.267692] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.371 [2024-12-08 20:10:04.267753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.371 [2024-12-08 20:10:04.267839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.371 [2024-12-08 20:10:04.267938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.371 [2024-12-08 20:10:04.267996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.371 [2024-12-08 20:10:04.339546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.371 [2024-12-08 20:10:04.339592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.371 [2024-12-08 20:10:04.339607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:32.371 [2024-12-08 20:10:04.339617] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:32.371 [2024-12-08 20:10:04.341892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.371 [2024-12-08 20:10:04.341929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.371 [2024-12-08 20:10:04.342037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.371 [2024-12-08 20:10:04.342083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.371 pt2 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.371 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.630 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.630 "name": "raid_bdev1", 00:14:32.630 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:32.630 "strip_size_kb": 64, 00:14:32.630 "state": "configuring", 00:14:32.630 "raid_level": "raid5f", 00:14:32.630 "superblock": true, 00:14:32.631 "num_base_bdevs": 3, 00:14:32.631 "num_base_bdevs_discovered": 1, 00:14:32.631 "num_base_bdevs_operational": 2, 00:14:32.631 "base_bdevs_list": [ 00:14:32.631 { 00:14:32.631 "name": null, 00:14:32.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.631 "is_configured": false, 00:14:32.631 "data_offset": 2048, 00:14:32.631 "data_size": 63488 00:14:32.631 }, 00:14:32.631 { 00:14:32.631 "name": "pt2", 00:14:32.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.631 "is_configured": true, 00:14:32.631 "data_offset": 2048, 00:14:32.631 "data_size": 63488 00:14:32.631 }, 00:14:32.631 { 00:14:32.631 "name": null, 00:14:32.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.631 "is_configured": false, 00:14:32.631 "data_offset": 2048, 00:14:32.631 "data_size": 63488 00:14:32.631 } 00:14:32.631 ] 00:14:32.631 }' 00:14:32.631 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.631 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.889 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.889 [2024-12-08 20:10:04.746951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:32.889 [2024-12-08 20:10:04.747093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.889 [2024-12-08 20:10:04.747133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:32.889 [2024-12-08 20:10:04.747195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.889 [2024-12-08 20:10:04.747696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.889 [2024-12-08 20:10:04.747752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:32.889 [2024-12-08 20:10:04.747877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:32.889 [2024-12-08 20:10:04.747933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:32.889 [2024-12-08 20:10:04.748101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:32.889 [2024-12-08 20:10:04.748142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.890 [2024-12-08 20:10:04.748425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:32.890 [2024-12-08 20:10:04.753566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:32.890 [2024-12-08 20:10:04.753617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:14:32.890 [2024-12-08 20:10:04.753987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.890 pt3 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.890 20:10:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.890 "name": "raid_bdev1", 00:14:32.890 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:32.890 "strip_size_kb": 64, 00:14:32.890 "state": "online", 00:14:32.890 "raid_level": "raid5f", 00:14:32.890 "superblock": true, 00:14:32.890 "num_base_bdevs": 3, 00:14:32.890 "num_base_bdevs_discovered": 2, 00:14:32.890 "num_base_bdevs_operational": 2, 00:14:32.890 "base_bdevs_list": [ 00:14:32.890 { 00:14:32.890 "name": null, 00:14:32.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.890 "is_configured": false, 00:14:32.890 "data_offset": 2048, 00:14:32.890 "data_size": 63488 00:14:32.890 }, 00:14:32.890 { 00:14:32.890 "name": "pt2", 00:14:32.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.890 "is_configured": true, 00:14:32.890 "data_offset": 2048, 00:14:32.890 "data_size": 63488 00:14:32.890 }, 00:14:32.890 { 00:14:32.890 "name": "pt3", 00:14:32.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.890 "is_configured": true, 00:14:32.890 "data_offset": 2048, 00:14:32.890 "data_size": 63488 00:14:32.890 } 00:14:32.890 ] 00:14:32.890 }' 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.890 20:10:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.456 [2024-12-08 20:10:05.187708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.456 [2024-12-08 20:10:05.187736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.456 [2024-12-08 20:10:05.187802] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.456 [2024-12-08 20:10:05.187863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.456 [2024-12-08 20:10:05.187872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:33.456 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.457 [2024-12-08 20:10:05.243632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.457 [2024-12-08 20:10:05.243685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.457 [2024-12-08 20:10:05.243703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:33.457 [2024-12-08 20:10:05.243712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.457 [2024-12-08 20:10:05.245896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.457 [2024-12-08 20:10:05.245930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.457 [2024-12-08 20:10:05.246008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:33.457 [2024-12-08 20:10:05.246051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:33.457 [2024-12-08 20:10:05.246208] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:33.457 [2024-12-08 20:10:05.246220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.457 [2024-12-08 20:10:05.246235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:33.457 [2024-12-08 20:10:05.246280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.457 pt1 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:33.457 20:10:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.457 "name": "raid_bdev1", 00:14:33.457 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:33.457 "strip_size_kb": 64, 00:14:33.457 "state": "configuring", 00:14:33.457 "raid_level": "raid5f", 00:14:33.457 
"superblock": true, 00:14:33.457 "num_base_bdevs": 3, 00:14:33.457 "num_base_bdevs_discovered": 1, 00:14:33.457 "num_base_bdevs_operational": 2, 00:14:33.457 "base_bdevs_list": [ 00:14:33.457 { 00:14:33.457 "name": null, 00:14:33.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.457 "is_configured": false, 00:14:33.457 "data_offset": 2048, 00:14:33.457 "data_size": 63488 00:14:33.457 }, 00:14:33.457 { 00:14:33.457 "name": "pt2", 00:14:33.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.457 "is_configured": true, 00:14:33.457 "data_offset": 2048, 00:14:33.457 "data_size": 63488 00:14:33.457 }, 00:14:33.457 { 00:14:33.457 "name": null, 00:14:33.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.457 "is_configured": false, 00:14:33.457 "data_offset": 2048, 00:14:33.457 "data_size": 63488 00:14:33.457 } 00:14:33.457 ] 00:14:33.457 }' 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.457 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.715 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:33.715 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:33.715 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.715 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.975 [2024-12-08 20:10:05.706952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.975 [2024-12-08 20:10:05.707067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.975 [2024-12-08 20:10:05.707107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:33.975 [2024-12-08 20:10:05.707136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.975 [2024-12-08 20:10:05.707702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.975 [2024-12-08 20:10:05.707771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.975 [2024-12-08 20:10:05.707918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:33.975 [2024-12-08 20:10:05.707990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:33.975 [2024-12-08 20:10:05.708177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:33.975 [2024-12-08 20:10:05.708222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.975 [2024-12-08 20:10:05.708537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:33.975 [2024-12-08 20:10:05.714439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:33.975 [2024-12-08 20:10:05.714496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:33.975 pt3 00:14:33.975 [2024-12-08 20:10:05.714793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.975 "name": "raid_bdev1", 00:14:33.975 "uuid": "12205ec0-7ce4-4c8a-b922-86aeffad9a0c", 00:14:33.975 "strip_size_kb": 64, 00:14:33.975 "state": "online", 00:14:33.975 "raid_level": 
"raid5f", 00:14:33.975 "superblock": true, 00:14:33.975 "num_base_bdevs": 3, 00:14:33.975 "num_base_bdevs_discovered": 2, 00:14:33.975 "num_base_bdevs_operational": 2, 00:14:33.975 "base_bdevs_list": [ 00:14:33.975 { 00:14:33.975 "name": null, 00:14:33.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.975 "is_configured": false, 00:14:33.975 "data_offset": 2048, 00:14:33.975 "data_size": 63488 00:14:33.975 }, 00:14:33.975 { 00:14:33.975 "name": "pt2", 00:14:33.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.975 "is_configured": true, 00:14:33.975 "data_offset": 2048, 00:14:33.975 "data_size": 63488 00:14:33.975 }, 00:14:33.975 { 00:14:33.975 "name": "pt3", 00:14:33.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.975 "is_configured": true, 00:14:33.975 "data_offset": 2048, 00:14:33.975 "data_size": 63488 00:14:33.975 } 00:14:33.975 ] 00:14:33.975 }' 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.975 20:10:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.235 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.235 [2024-12-08 20:10:06.193079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 12205ec0-7ce4-4c8a-b922-86aeffad9a0c '!=' 12205ec0-7ce4-4c8a-b922-86aeffad9a0c ']' 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80828 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80828 ']' 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80828 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80828 00:14:34.496 killing process with pid 80828 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80828' 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80828 00:14:34.496 [2024-12-08 20:10:06.246454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.496 [2024-12-08 20:10:06.246537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:34.496 [2024-12-08 20:10:06.246598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.496 [2024-12-08 20:10:06.246610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:34.496 20:10:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80828 00:14:34.756 [2024-12-08 20:10:06.528912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.697 20:10:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:35.697 00:14:35.697 real 0m7.438s 00:14:35.697 user 0m11.613s 00:14:35.697 sys 0m1.249s 00:14:35.697 ************************************ 00:14:35.697 END TEST raid5f_superblock_test 00:14:35.697 ************************************ 00:14:35.697 20:10:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.697 20:10:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.697 20:10:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:35.697 20:10:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:35.697 20:10:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:35.697 20:10:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.697 20:10:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.697 ************************************ 00:14:35.697 START TEST raid5f_rebuild_test 00:14:35.697 ************************************ 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:35.697 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.958 20:10:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81263 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81263 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81263 ']' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.958 20:10:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.958 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:35.958 Zero copy mechanism will not be used. 00:14:35.958 [2024-12-08 20:10:07.773058] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:35.958 [2024-12-08 20:10:07.773162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81263 ] 00:14:36.218 [2024-12-08 20:10:07.946409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.218 [2024-12-08 20:10:08.053217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.478 [2024-12-08 20:10:08.241263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.478 [2024-12-08 20:10:08.241320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.739 BaseBdev1_malloc 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.739 20:10:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.739 [2024-12-08 20:10:08.636397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.739 [2024-12-08 20:10:08.636456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.739 [2024-12-08 20:10:08.636476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.739 [2024-12-08 20:10:08.636487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.739 [2024-12-08 20:10:08.638512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.739 [2024-12-08 20:10:08.638552] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.739 BaseBdev1 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.739 BaseBdev2_malloc 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.739 [2024-12-08 20:10:08.687919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:14:36.739 [2024-12-08 20:10:08.688025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.739 [2024-12-08 20:10:08.688050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.739 [2024-12-08 20:10:08.688061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.739 [2024-12-08 20:10:08.690018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.739 [2024-12-08 20:10:08.690064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.739 BaseBdev2 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.739 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 BaseBdev3_malloc 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 [2024-12-08 20:10:08.771299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:37.000 [2024-12-08 20:10:08.771426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.000 [2024-12-08 20:10:08.771451] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.000 [2024-12-08 20:10:08.771462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.000 [2024-12-08 20:10:08.773444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.000 [2024-12-08 20:10:08.773492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.000 BaseBdev3 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 spare_malloc 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 spare_delay 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 [2024-12-08 20:10:08.835795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.000 [2024-12-08 20:10:08.835845] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.000 [2024-12-08 20:10:08.835861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:37.000 [2024-12-08 20:10:08.835871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.000 [2024-12-08 20:10:08.837945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.000 [2024-12-08 20:10:08.837993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.000 spare 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 [2024-12-08 20:10:08.847838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.000 [2024-12-08 20:10:08.849643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.000 [2024-12-08 20:10:08.849704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.000 [2024-12-08 20:10:08.849784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.000 [2024-12-08 20:10:08.849794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:37.000 [2024-12-08 20:10:08.850038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:37.000 [2024-12-08 20:10:08.855425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.000 [2024-12-08 20:10:08.855482] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.000 [2024-12-08 20:10:08.855691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.000 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.001 20:10:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.001 "name": "raid_bdev1", 00:14:37.001 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:37.001 "strip_size_kb": 64, 00:14:37.001 "state": "online", 00:14:37.001 "raid_level": "raid5f", 00:14:37.001 "superblock": false, 00:14:37.001 "num_base_bdevs": 3, 00:14:37.001 "num_base_bdevs_discovered": 3, 00:14:37.001 "num_base_bdevs_operational": 3, 00:14:37.001 "base_bdevs_list": [ 00:14:37.001 { 00:14:37.001 "name": "BaseBdev1", 00:14:37.001 "uuid": "bcb7eb92-019f-5722-a7e7-24a6c0015aef", 00:14:37.001 "is_configured": true, 00:14:37.001 "data_offset": 0, 00:14:37.001 "data_size": 65536 00:14:37.001 }, 00:14:37.001 { 00:14:37.001 "name": "BaseBdev2", 00:14:37.001 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:37.001 "is_configured": true, 00:14:37.001 "data_offset": 0, 00:14:37.001 "data_size": 65536 00:14:37.001 }, 00:14:37.001 { 00:14:37.001 "name": "BaseBdev3", 00:14:37.001 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:37.001 "is_configured": true, 00:14:37.001 "data_offset": 0, 00:14:37.001 "data_size": 65536 00:14:37.001 } 00:14:37.001 ] 00:14:37.001 }' 00:14:37.001 20:10:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.001 20:10:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.572 [2024-12-08 20:10:09.261732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:37.572 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:37.572 [2024-12-08 20:10:09.529131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:37.832 /dev/nbd0 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.832 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.833 1+0 records in 00:14:37.833 1+0 records out 00:14:37.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503534 s, 8.1 MB/s 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:37.833 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:38.093 512+0 records in 00:14:38.093 512+0 records out 00:14:38.093 67108864 bytes (67 MB, 64 MiB) copied, 0.359016 s, 187 MB/s 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.093 20:10:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.353 
[2024-12-08 20:10:10.178084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.353 [2024-12-08 20:10:10.189841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.353 "name": "raid_bdev1", 00:14:38.353 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:38.353 "strip_size_kb": 64, 00:14:38.353 "state": "online", 00:14:38.353 "raid_level": "raid5f", 00:14:38.353 "superblock": false, 00:14:38.353 "num_base_bdevs": 3, 00:14:38.353 "num_base_bdevs_discovered": 2, 00:14:38.353 "num_base_bdevs_operational": 2, 00:14:38.353 "base_bdevs_list": [ 00:14:38.353 { 00:14:38.353 "name": null, 00:14:38.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.353 "is_configured": false, 00:14:38.353 "data_offset": 0, 00:14:38.353 "data_size": 65536 00:14:38.353 }, 00:14:38.353 { 00:14:38.353 "name": "BaseBdev2", 00:14:38.353 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:38.353 "is_configured": true, 00:14:38.353 "data_offset": 0, 00:14:38.353 "data_size": 65536 00:14:38.353 }, 00:14:38.353 { 00:14:38.353 "name": "BaseBdev3", 00:14:38.353 "uuid": 
"7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:38.353 "is_configured": true, 00:14:38.353 "data_offset": 0, 00:14:38.353 "data_size": 65536 00:14:38.353 } 00:14:38.353 ] 00:14:38.353 }' 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.353 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.933 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.933 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.933 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.933 [2024-12-08 20:10:10.613065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.933 [2024-12-08 20:10:10.629037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:38.933 20:10:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.933 20:10:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:38.933 [2024-12-08 20:10:10.636209] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.964 20:10:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.964 "name": "raid_bdev1", 00:14:39.964 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:39.964 "strip_size_kb": 64, 00:14:39.964 "state": "online", 00:14:39.964 "raid_level": "raid5f", 00:14:39.964 "superblock": false, 00:14:39.964 "num_base_bdevs": 3, 00:14:39.964 "num_base_bdevs_discovered": 3, 00:14:39.964 "num_base_bdevs_operational": 3, 00:14:39.964 "process": { 00:14:39.964 "type": "rebuild", 00:14:39.964 "target": "spare", 00:14:39.964 "progress": { 00:14:39.964 "blocks": 20480, 00:14:39.964 "percent": 15 00:14:39.964 } 00:14:39.964 }, 00:14:39.964 "base_bdevs_list": [ 00:14:39.964 { 00:14:39.964 "name": "spare", 00:14:39.964 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:39.964 "is_configured": true, 00:14:39.964 "data_offset": 0, 00:14:39.964 "data_size": 65536 00:14:39.964 }, 00:14:39.964 { 00:14:39.964 "name": "BaseBdev2", 00:14:39.964 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:39.964 "is_configured": true, 00:14:39.964 "data_offset": 0, 00:14:39.964 "data_size": 65536 00:14:39.964 }, 00:14:39.964 { 00:14:39.964 "name": "BaseBdev3", 00:14:39.964 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:39.964 "is_configured": true, 00:14:39.964 "data_offset": 0, 00:14:39.964 "data_size": 65536 00:14:39.964 } 00:14:39.964 ] 00:14:39.964 }' 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.964 [2024-12-08 20:10:11.767495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.964 [2024-12-08 20:10:11.844087] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.964 [2024-12-08 20:10:11.844205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.964 [2024-12-08 20:10:11.844246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.964 [2024-12-08 20:10:11.844258] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.964 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.964 "name": "raid_bdev1", 00:14:39.964 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:39.964 "strip_size_kb": 64, 00:14:39.964 "state": "online", 00:14:39.964 "raid_level": "raid5f", 00:14:39.964 "superblock": false, 00:14:39.964 "num_base_bdevs": 3, 00:14:39.964 "num_base_bdevs_discovered": 2, 00:14:39.964 "num_base_bdevs_operational": 2, 00:14:39.964 "base_bdevs_list": [ 00:14:39.964 { 00:14:39.964 "name": null, 00:14:39.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.964 "is_configured": false, 00:14:39.964 "data_offset": 0, 00:14:39.964 "data_size": 65536 00:14:39.964 }, 00:14:39.964 { 00:14:39.964 "name": "BaseBdev2", 00:14:39.964 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:39.964 "is_configured": true, 00:14:39.964 "data_offset": 0, 00:14:39.964 "data_size": 65536 00:14:39.964 }, 00:14:39.964 { 00:14:39.964 "name": "BaseBdev3", 00:14:39.965 "uuid": 
"7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:39.965 "is_configured": true, 00:14:39.965 "data_offset": 0, 00:14:39.965 "data_size": 65536 00:14:39.965 } 00:14:39.965 ] 00:14:39.965 }' 00:14:39.965 20:10:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.965 20:10:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.533 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.534 "name": "raid_bdev1", 00:14:40.534 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:40.534 "strip_size_kb": 64, 00:14:40.534 "state": "online", 00:14:40.534 "raid_level": "raid5f", 00:14:40.534 "superblock": false, 00:14:40.534 "num_base_bdevs": 3, 00:14:40.534 "num_base_bdevs_discovered": 2, 00:14:40.534 "num_base_bdevs_operational": 2, 00:14:40.534 "base_bdevs_list": [ 00:14:40.534 { 00:14:40.534 
"name": null, 00:14:40.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.534 "is_configured": false, 00:14:40.534 "data_offset": 0, 00:14:40.534 "data_size": 65536 00:14:40.534 }, 00:14:40.534 { 00:14:40.534 "name": "BaseBdev2", 00:14:40.534 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:40.534 "is_configured": true, 00:14:40.534 "data_offset": 0, 00:14:40.534 "data_size": 65536 00:14:40.534 }, 00:14:40.534 { 00:14:40.534 "name": "BaseBdev3", 00:14:40.534 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:40.534 "is_configured": true, 00:14:40.534 "data_offset": 0, 00:14:40.534 "data_size": 65536 00:14:40.534 } 00:14:40.534 ] 00:14:40.534 }' 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.534 [2024-12-08 20:10:12.353805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.534 [2024-12-08 20:10:12.369315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.534 20:10:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.534 [2024-12-08 20:10:12.376418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:14:41.473 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.473 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.473 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.473 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.473 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.474 "name": "raid_bdev1", 00:14:41.474 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:41.474 "strip_size_kb": 64, 00:14:41.474 "state": "online", 00:14:41.474 "raid_level": "raid5f", 00:14:41.474 "superblock": false, 00:14:41.474 "num_base_bdevs": 3, 00:14:41.474 "num_base_bdevs_discovered": 3, 00:14:41.474 "num_base_bdevs_operational": 3, 00:14:41.474 "process": { 00:14:41.474 "type": "rebuild", 00:14:41.474 "target": "spare", 00:14:41.474 "progress": { 00:14:41.474 "blocks": 20480, 00:14:41.474 "percent": 15 00:14:41.474 } 00:14:41.474 }, 00:14:41.474 "base_bdevs_list": [ 00:14:41.474 { 00:14:41.474 "name": "spare", 00:14:41.474 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 0, 
00:14:41.474 "data_size": 65536 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": "BaseBdev2", 00:14:41.474 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 0, 00:14:41.474 "data_size": 65536 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": "BaseBdev3", 00:14:41.474 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 0, 00:14:41.474 "data_size": 65536 00:14:41.474 } 00:14:41.474 ] 00:14:41.474 }' 00:14:41.474 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.734 20:10:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.734 "name": "raid_bdev1", 00:14:41.734 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:41.734 "strip_size_kb": 64, 00:14:41.734 "state": "online", 00:14:41.734 "raid_level": "raid5f", 00:14:41.734 "superblock": false, 00:14:41.734 "num_base_bdevs": 3, 00:14:41.734 "num_base_bdevs_discovered": 3, 00:14:41.734 "num_base_bdevs_operational": 3, 00:14:41.734 "process": { 00:14:41.734 "type": "rebuild", 00:14:41.734 "target": "spare", 00:14:41.734 "progress": { 00:14:41.734 "blocks": 22528, 00:14:41.734 "percent": 17 00:14:41.734 } 00:14:41.734 }, 00:14:41.734 "base_bdevs_list": [ 00:14:41.734 { 00:14:41.734 "name": "spare", 00:14:41.734 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:41.734 "is_configured": true, 00:14:41.734 "data_offset": 0, 00:14:41.734 "data_size": 65536 00:14:41.734 }, 00:14:41.734 { 00:14:41.734 "name": "BaseBdev2", 00:14:41.734 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:41.734 "is_configured": true, 00:14:41.734 "data_offset": 0, 00:14:41.734 "data_size": 65536 00:14:41.734 }, 00:14:41.734 { 00:14:41.734 "name": "BaseBdev3", 00:14:41.734 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:41.734 "is_configured": true, 00:14:41.734 "data_offset": 0, 00:14:41.734 "data_size": 65536 00:14:41.734 } 
00:14:41.734 ] 00:14:41.734 }' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.734 20:10:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.674 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.674 "name": "raid_bdev1", 00:14:42.674 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:42.674 
"strip_size_kb": 64, 00:14:42.674 "state": "online", 00:14:42.674 "raid_level": "raid5f", 00:14:42.674 "superblock": false, 00:14:42.674 "num_base_bdevs": 3, 00:14:42.674 "num_base_bdevs_discovered": 3, 00:14:42.674 "num_base_bdevs_operational": 3, 00:14:42.674 "process": { 00:14:42.674 "type": "rebuild", 00:14:42.674 "target": "spare", 00:14:42.674 "progress": { 00:14:42.674 "blocks": 45056, 00:14:42.674 "percent": 34 00:14:42.674 } 00:14:42.674 }, 00:14:42.674 "base_bdevs_list": [ 00:14:42.674 { 00:14:42.674 "name": "spare", 00:14:42.674 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:42.674 "is_configured": true, 00:14:42.674 "data_offset": 0, 00:14:42.674 "data_size": 65536 00:14:42.674 }, 00:14:42.674 { 00:14:42.675 "name": "BaseBdev2", 00:14:42.675 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 }, 00:14:42.675 { 00:14:42.675 "name": "BaseBdev3", 00:14:42.675 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:42.675 "is_configured": true, 00:14:42.675 "data_offset": 0, 00:14:42.675 "data_size": 65536 00:14:42.675 } 00:14:42.675 ] 00:14:42.675 }' 00:14:42.675 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.934 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.934 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.934 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.934 20:10:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.872 20:10:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.872 "name": "raid_bdev1", 00:14:43.872 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:43.872 "strip_size_kb": 64, 00:14:43.872 "state": "online", 00:14:43.872 "raid_level": "raid5f", 00:14:43.872 "superblock": false, 00:14:43.872 "num_base_bdevs": 3, 00:14:43.872 "num_base_bdevs_discovered": 3, 00:14:43.872 "num_base_bdevs_operational": 3, 00:14:43.872 "process": { 00:14:43.872 "type": "rebuild", 00:14:43.872 "target": "spare", 00:14:43.872 "progress": { 00:14:43.872 "blocks": 67584, 00:14:43.872 "percent": 51 00:14:43.872 } 00:14:43.872 }, 00:14:43.872 "base_bdevs_list": [ 00:14:43.872 { 00:14:43.872 "name": "spare", 00:14:43.872 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:43.872 "is_configured": true, 00:14:43.872 "data_offset": 0, 00:14:43.872 "data_size": 65536 00:14:43.872 }, 00:14:43.872 { 00:14:43.872 "name": "BaseBdev2", 00:14:43.872 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:43.872 
"is_configured": true, 00:14:43.872 "data_offset": 0, 00:14:43.872 "data_size": 65536 00:14:43.872 }, 00:14:43.872 { 00:14:43.872 "name": "BaseBdev3", 00:14:43.872 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:43.872 "is_configured": true, 00:14:43.872 "data_offset": 0, 00:14:43.872 "data_size": 65536 00:14:43.872 } 00:14:43.872 ] 00:14:43.872 }' 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.872 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.132 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.132 20:10:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.072 "name": "raid_bdev1", 00:14:45.072 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:45.072 "strip_size_kb": 64, 00:14:45.072 "state": "online", 00:14:45.072 "raid_level": "raid5f", 00:14:45.072 "superblock": false, 00:14:45.072 "num_base_bdevs": 3, 00:14:45.072 "num_base_bdevs_discovered": 3, 00:14:45.072 "num_base_bdevs_operational": 3, 00:14:45.072 "process": { 00:14:45.072 "type": "rebuild", 00:14:45.072 "target": "spare", 00:14:45.072 "progress": { 00:14:45.072 "blocks": 90112, 00:14:45.072 "percent": 68 00:14:45.072 } 00:14:45.072 }, 00:14:45.072 "base_bdevs_list": [ 00:14:45.072 { 00:14:45.072 "name": "spare", 00:14:45.072 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 }, 00:14:45.072 { 00:14:45.072 "name": "BaseBdev2", 00:14:45.072 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 }, 00:14:45.072 { 00:14:45.072 "name": "BaseBdev3", 00:14:45.072 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 } 00:14:45.072 ] 00:14:45.072 }' 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.072 20:10:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.072 20:10:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.072 20:10:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.456 "name": "raid_bdev1", 00:14:46.456 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:46.456 "strip_size_kb": 64, 00:14:46.456 "state": "online", 00:14:46.456 "raid_level": "raid5f", 00:14:46.456 "superblock": false, 00:14:46.456 "num_base_bdevs": 3, 00:14:46.456 "num_base_bdevs_discovered": 3, 00:14:46.456 "num_base_bdevs_operational": 3, 00:14:46.456 "process": { 00:14:46.456 "type": "rebuild", 00:14:46.456 "target": "spare", 00:14:46.456 "progress": { 00:14:46.456 "blocks": 114688, 00:14:46.456 "percent": 87 00:14:46.456 } 00:14:46.456 }, 00:14:46.456 "base_bdevs_list": [ 00:14:46.456 { 
00:14:46.456 "name": "spare", 00:14:46.456 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:46.456 "is_configured": true, 00:14:46.456 "data_offset": 0, 00:14:46.456 "data_size": 65536 00:14:46.456 }, 00:14:46.456 { 00:14:46.456 "name": "BaseBdev2", 00:14:46.456 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:46.456 "is_configured": true, 00:14:46.456 "data_offset": 0, 00:14:46.456 "data_size": 65536 00:14:46.456 }, 00:14:46.456 { 00:14:46.456 "name": "BaseBdev3", 00:14:46.456 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:46.456 "is_configured": true, 00:14:46.456 "data_offset": 0, 00:14:46.456 "data_size": 65536 00:14:46.456 } 00:14:46.456 ] 00:14:46.456 }' 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.456 20:10:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.026 [2024-12-08 20:10:18.814823] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:47.026 [2024-12-08 20:10:18.814892] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:47.026 [2024-12-08 20:10:18.814931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.285 20:10:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.285 "name": "raid_bdev1", 00:14:47.285 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:47.285 "strip_size_kb": 64, 00:14:47.285 "state": "online", 00:14:47.285 "raid_level": "raid5f", 00:14:47.285 "superblock": false, 00:14:47.285 "num_base_bdevs": 3, 00:14:47.285 "num_base_bdevs_discovered": 3, 00:14:47.285 "num_base_bdevs_operational": 3, 00:14:47.285 "base_bdevs_list": [ 00:14:47.285 { 00:14:47.285 "name": "spare", 00:14:47.285 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:47.285 "is_configured": true, 00:14:47.285 "data_offset": 0, 00:14:47.285 "data_size": 65536 00:14:47.285 }, 00:14:47.285 { 00:14:47.285 "name": "BaseBdev2", 00:14:47.285 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:47.285 "is_configured": true, 00:14:47.285 "data_offset": 0, 00:14:47.285 "data_size": 65536 00:14:47.285 }, 00:14:47.285 { 00:14:47.285 "name": "BaseBdev3", 00:14:47.285 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:47.285 "is_configured": true, 00:14:47.285 "data_offset": 0, 00:14:47.285 "data_size": 65536 00:14:47.285 } 
00:14:47.285 ] 00:14:47.285 }' 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:47.285 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.545 "name": "raid_bdev1", 00:14:47.545 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:47.545 "strip_size_kb": 64, 00:14:47.545 "state": "online", 00:14:47.545 "raid_level": "raid5f", 00:14:47.545 "superblock": false, 
00:14:47.545 "num_base_bdevs": 3, 00:14:47.545 "num_base_bdevs_discovered": 3, 00:14:47.545 "num_base_bdevs_operational": 3, 00:14:47.545 "base_bdevs_list": [ 00:14:47.545 { 00:14:47.545 "name": "spare", 00:14:47.545 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 }, 00:14:47.545 { 00:14:47.545 "name": "BaseBdev2", 00:14:47.545 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 }, 00:14:47.545 { 00:14:47.545 "name": "BaseBdev3", 00:14:47.545 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 } 00:14:47.545 ] 00:14:47.545 }' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.545 
20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.545 "name": "raid_bdev1", 00:14:47.545 "uuid": "94973591-e7ac-4f37-8d82-cc22e6767392", 00:14:47.545 "strip_size_kb": 64, 00:14:47.545 "state": "online", 00:14:47.545 "raid_level": "raid5f", 00:14:47.545 "superblock": false, 00:14:47.545 "num_base_bdevs": 3, 00:14:47.545 "num_base_bdevs_discovered": 3, 00:14:47.545 "num_base_bdevs_operational": 3, 00:14:47.545 "base_bdevs_list": [ 00:14:47.545 { 00:14:47.545 "name": "spare", 00:14:47.545 "uuid": "56f64813-ee8f-5893-a976-87263f579f5a", 00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 }, 00:14:47.545 { 00:14:47.545 "name": "BaseBdev2", 00:14:47.545 "uuid": "b2c0c0cc-8ffd-5a8b-a3e8-2a7fed9619e7", 00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 }, 00:14:47.545 { 00:14:47.545 "name": "BaseBdev3", 00:14:47.545 "uuid": "7c30dcf2-37d6-5a03-9b07-cd7297d5d476", 
00:14:47.545 "is_configured": true, 00:14:47.545 "data_offset": 0, 00:14:47.545 "data_size": 65536 00:14:47.545 } 00:14:47.545 ] 00:14:47.545 }' 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.545 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.114 [2024-12-08 20:10:19.815285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.114 [2024-12-08 20:10:19.815357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.114 [2024-12-08 20:10:19.815460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.114 [2024-12-08 20:10:19.815596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.114 [2024-12-08 20:10:19.815652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.114 20:10:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:48.114 /dev/nbd0 00:14:48.114 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.114 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.373 1+0 records in 00:14:48.373 1+0 records out 00:14:48.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302275 s, 13.6 MB/s 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:48.373 /dev/nbd1 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.373 20:10:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:48.373 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.632 1+0 records in 00:14:48.632 1+0 records out 00:14:48.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410357 s, 10.0 MB/s 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.632 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.892 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81263 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81263 ']' 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81263 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.150 20:10:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81263 00:14:49.150 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.150 killing process with pid 81263 00:14:49.150 Received shutdown signal, test time was about 60.000000 seconds 00:14:49.150 00:14:49.150 Latency(us) 00:14:49.150 [2024-12-08T20:10:21.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.150 [2024-12-08T20:10:21.128Z] 
=================================================================================================================== 00:14:49.150 [2024-12-08T20:10:21.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.150 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.150 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81263' 00:14:49.150 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81263 00:14:49.150 [2024-12-08 20:10:21.014734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.150 20:10:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81263 00:14:49.718 [2024-12-08 20:10:21.391568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.656 00:14:50.656 real 0m14.769s 00:14:50.656 user 0m17.844s 00:14:50.656 sys 0m1.964s 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.656 ************************************ 00:14:50.656 END TEST raid5f_rebuild_test 00:14:50.656 ************************************ 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 20:10:22 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:50.656 20:10:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.656 20:10:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.656 20:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 ************************************ 00:14:50.656 START TEST raid5f_rebuild_test_sb 00:14:50.656 ************************************ 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.656 
20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81705 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81705 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81705 ']' 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.656 20:10:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.656 [2024-12-08 20:10:22.616078] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:50.656 [2024-12-08 20:10:22.616280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.656 Zero copy mechanism will not be used. 00:14:50.656 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81705 ] 00:14:50.916 [2024-12-08 20:10:22.788360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.916 [2024-12-08 20:10:22.888503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.176 [2024-12-08 20:10:23.077354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.176 [2024-12-08 20:10:23.077486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 BaseBdev1_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 [2024-12-08 20:10:23.474408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.748 [2024-12-08 20:10:23.474504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.748 [2024-12-08 20:10:23.474530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.748 [2024-12-08 20:10:23.474541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.748 [2024-12-08 20:10:23.476644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.748 [2024-12-08 20:10:23.476715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.748 BaseBdev1 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 BaseBdev2_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 [2024-12-08 20:10:23.527534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.748 [2024-12-08 20:10:23.527639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.748 [2024-12-08 20:10:23.527683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.748 [2024-12-08 20:10:23.527717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.748 [2024-12-08 20:10:23.529712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.748 [2024-12-08 20:10:23.529779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.748 BaseBdev2 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 BaseBdev3_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 [2024-12-08 20:10:23.616259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:51.748 [2024-12-08 20:10:23.616375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.748 [2024-12-08 20:10:23.616414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:51.748 [2024-12-08 20:10:23.616428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.748 [2024-12-08 20:10:23.618392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.748 [2024-12-08 20:10:23.618430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.748 BaseBdev3 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 spare_malloc 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.748 spare_delay 00:14:51.748 
20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.748 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.749 [2024-12-08 20:10:23.680047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.749 [2024-12-08 20:10:23.680145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.749 [2024-12-08 20:10:23.680180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:51.749 [2024-12-08 20:10:23.680210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.749 [2024-12-08 20:10:23.682235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.749 [2024-12-08 20:10:23.682304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.749 spare 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.749 [2024-12-08 20:10:23.692100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.749 [2024-12-08 20:10:23.693837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.749 [2024-12-08 20:10:23.693952] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.749 [2024-12-08 20:10:23.694166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:51.749 [2024-12-08 20:10:23.694211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:51.749 [2024-12-08 20:10:23.694461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:51.749 [2024-12-08 20:10:23.699423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.749 [2024-12-08 20:10:23.699447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.749 [2024-12-08 20:10:23.699623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.749 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.008 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.008 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.008 "name": "raid_bdev1", 00:14:52.008 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:52.008 "strip_size_kb": 64, 00:14:52.009 "state": "online", 00:14:52.009 "raid_level": "raid5f", 00:14:52.009 "superblock": true, 00:14:52.009 "num_base_bdevs": 3, 00:14:52.009 "num_base_bdevs_discovered": 3, 00:14:52.009 "num_base_bdevs_operational": 3, 00:14:52.009 "base_bdevs_list": [ 00:14:52.009 { 00:14:52.009 "name": "BaseBdev1", 00:14:52.009 "uuid": "b9a6d9af-cb95-5446-bd08-aeeb1e80a8c1", 00:14:52.009 "is_configured": true, 00:14:52.009 "data_offset": 2048, 00:14:52.009 "data_size": 63488 00:14:52.009 }, 00:14:52.009 { 00:14:52.009 "name": "BaseBdev2", 00:14:52.009 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:52.009 "is_configured": true, 00:14:52.009 "data_offset": 2048, 00:14:52.009 "data_size": 63488 00:14:52.009 }, 00:14:52.009 { 00:14:52.009 "name": "BaseBdev3", 00:14:52.009 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:52.009 "is_configured": true, 00:14:52.009 "data_offset": 2048, 00:14:52.009 "data_size": 63488 00:14:52.009 } 00:14:52.009 ] 00:14:52.009 }' 00:14:52.009 20:10:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.009 20:10:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 [2024-12-08 20:10:24.165195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:52.269 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:52.529 20:10:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:52.529 [2024-12-08 20:10:24.432614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.529 /dev/nbd0 00:14:52.529 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.530 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.790 1+0 records in 00:14:52.790 1+0 records out 00:14:52.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605346 s, 6.8 MB/s 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:52.790 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:53.050 496+0 records in 00:14:53.050 496+0 records out 00:14:53.050 65011712 bytes (65 MB, 62 MiB) copied, 0.35358 s, 184 MB/s 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.050 20:10:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.312 [2024-12-08 20:10:25.093165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.312 [2024-12-08 20:10:25.108814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.312 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.312 "name": "raid_bdev1", 00:14:53.312 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:53.312 "strip_size_kb": 64, 00:14:53.312 "state": "online", 00:14:53.312 "raid_level": "raid5f", 00:14:53.312 "superblock": true, 00:14:53.312 "num_base_bdevs": 3, 00:14:53.312 "num_base_bdevs_discovered": 2, 00:14:53.312 "num_base_bdevs_operational": 2, 00:14:53.312 "base_bdevs_list": [ 00:14:53.312 { 00:14:53.312 "name": null, 00:14:53.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.312 "is_configured": false, 00:14:53.312 "data_offset": 0, 00:14:53.312 "data_size": 63488 00:14:53.312 }, 00:14:53.312 { 00:14:53.312 "name": "BaseBdev2", 00:14:53.312 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:53.312 "is_configured": true, 00:14:53.312 "data_offset": 2048, 00:14:53.312 "data_size": 63488 00:14:53.312 }, 00:14:53.313 { 00:14:53.313 "name": "BaseBdev3", 00:14:53.313 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:53.313 "is_configured": true, 00:14:53.313 "data_offset": 2048, 00:14:53.313 "data_size": 63488 00:14:53.313 } 00:14:53.313 ] 00:14:53.313 }' 00:14:53.313 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.313 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.575 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.575 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.575 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.575 [2024-12-08 20:10:25.532087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.575 [2024-12-08 20:10:25.548918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:14:53.575 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.575 20:10:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:53.834 [2024-12-08 20:10:25.557324] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.772 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.772 "name": "raid_bdev1", 00:14:54.772 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:54.772 "strip_size_kb": 64, 00:14:54.772 "state": "online", 00:14:54.772 "raid_level": "raid5f", 00:14:54.772 "superblock": true, 00:14:54.772 "num_base_bdevs": 3, 00:14:54.772 "num_base_bdevs_discovered": 3, 00:14:54.772 "num_base_bdevs_operational": 3, 00:14:54.772 "process": { 00:14:54.772 "type": "rebuild", 00:14:54.772 "target": "spare", 00:14:54.772 "progress": { 
00:14:54.772 "blocks": 20480, 00:14:54.772 "percent": 16 00:14:54.772 } 00:14:54.772 }, 00:14:54.772 "base_bdevs_list": [ 00:14:54.772 { 00:14:54.772 "name": "spare", 00:14:54.772 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:14:54.772 "is_configured": true, 00:14:54.772 "data_offset": 2048, 00:14:54.772 "data_size": 63488 00:14:54.772 }, 00:14:54.772 { 00:14:54.772 "name": "BaseBdev2", 00:14:54.772 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:54.772 "is_configured": true, 00:14:54.772 "data_offset": 2048, 00:14:54.772 "data_size": 63488 00:14:54.772 }, 00:14:54.772 { 00:14:54.772 "name": "BaseBdev3", 00:14:54.772 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:54.772 "is_configured": true, 00:14:54.772 "data_offset": 2048, 00:14:54.772 "data_size": 63488 00:14:54.772 } 00:14:54.772 ] 00:14:54.772 }' 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.773 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.773 [2024-12-08 20:10:26.708175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.032 [2024-12-08 20:10:26.765063] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:55.032 [2024-12-08 20:10:26.765179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:55.032 [2024-12-08 20:10:26.765219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.032 [2024-12-08 20:10:26.765240] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.032 20:10:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.032 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.032 "name": "raid_bdev1", 00:14:55.032 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:55.032 "strip_size_kb": 64, 00:14:55.032 "state": "online", 00:14:55.032 "raid_level": "raid5f", 00:14:55.032 "superblock": true, 00:14:55.032 "num_base_bdevs": 3, 00:14:55.032 "num_base_bdevs_discovered": 2, 00:14:55.032 "num_base_bdevs_operational": 2, 00:14:55.032 "base_bdevs_list": [ 00:14:55.032 { 00:14:55.032 "name": null, 00:14:55.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.032 "is_configured": false, 00:14:55.032 "data_offset": 0, 00:14:55.032 "data_size": 63488 00:14:55.032 }, 00:14:55.032 { 00:14:55.032 "name": "BaseBdev2", 00:14:55.032 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:55.032 "is_configured": true, 00:14:55.032 "data_offset": 2048, 00:14:55.032 "data_size": 63488 00:14:55.032 }, 00:14:55.032 { 00:14:55.032 "name": "BaseBdev3", 00:14:55.033 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:55.033 "is_configured": true, 00:14:55.033 "data_offset": 2048, 00:14:55.033 "data_size": 63488 00:14:55.033 } 00:14:55.033 ] 00:14:55.033 }' 00:14:55.033 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.033 20:10:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.293 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.552 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.552 "name": "raid_bdev1", 00:14:55.552 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:55.552 "strip_size_kb": 64, 00:14:55.552 "state": "online", 00:14:55.552 "raid_level": "raid5f", 00:14:55.552 "superblock": true, 00:14:55.552 "num_base_bdevs": 3, 00:14:55.552 "num_base_bdevs_discovered": 2, 00:14:55.552 "num_base_bdevs_operational": 2, 00:14:55.552 "base_bdevs_list": [ 00:14:55.552 { 00:14:55.552 "name": null, 00:14:55.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.552 "is_configured": false, 00:14:55.552 "data_offset": 0, 00:14:55.553 "data_size": 63488 00:14:55.553 }, 00:14:55.553 { 00:14:55.553 "name": "BaseBdev2", 00:14:55.553 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:55.553 "is_configured": true, 00:14:55.553 "data_offset": 2048, 00:14:55.553 "data_size": 63488 00:14:55.553 }, 00:14:55.553 { 00:14:55.553 "name": "BaseBdev3", 00:14:55.553 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:55.553 "is_configured": true, 00:14:55.553 "data_offset": 2048, 00:14:55.553 "data_size": 63488 00:14:55.553 } 00:14:55.553 ] 00:14:55.553 }' 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.553 [2024-12-08 20:10:27.376291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.553 [2024-12-08 20:10:27.391753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.553 20:10:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:55.553 [2024-12-08 20:10:27.398635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.493 "name": "raid_bdev1", 00:14:56.493 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:56.493 "strip_size_kb": 64, 00:14:56.493 "state": "online", 00:14:56.493 "raid_level": "raid5f", 00:14:56.493 "superblock": true, 00:14:56.493 "num_base_bdevs": 3, 00:14:56.493 "num_base_bdevs_discovered": 3, 00:14:56.493 "num_base_bdevs_operational": 3, 00:14:56.493 "process": { 00:14:56.493 "type": "rebuild", 00:14:56.493 "target": "spare", 00:14:56.493 "progress": { 00:14:56.493 "blocks": 20480, 00:14:56.493 "percent": 16 00:14:56.493 } 00:14:56.493 }, 00:14:56.493 "base_bdevs_list": [ 00:14:56.493 { 00:14:56.493 "name": "spare", 00:14:56.493 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:14:56.493 "is_configured": true, 00:14:56.493 "data_offset": 2048, 00:14:56.493 "data_size": 63488 00:14:56.493 }, 00:14:56.493 { 00:14:56.493 "name": "BaseBdev2", 00:14:56.493 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:56.493 "is_configured": true, 00:14:56.493 "data_offset": 2048, 00:14:56.493 "data_size": 63488 00:14:56.493 }, 00:14:56.493 { 00:14:56.493 "name": "BaseBdev3", 00:14:56.493 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:56.493 "is_configured": true, 00:14:56.493 "data_offset": 2048, 00:14:56.493 "data_size": 63488 00:14:56.493 } 00:14:56.493 ] 00:14:56.493 }' 00:14:56.493 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.753 
20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:56.753 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=550 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.753 "name": "raid_bdev1", 00:14:56.753 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:56.753 "strip_size_kb": 64, 00:14:56.753 "state": "online", 00:14:56.753 "raid_level": "raid5f", 00:14:56.753 "superblock": true, 00:14:56.753 "num_base_bdevs": 3, 00:14:56.753 "num_base_bdevs_discovered": 3, 00:14:56.753 "num_base_bdevs_operational": 3, 00:14:56.753 "process": { 00:14:56.753 "type": "rebuild", 00:14:56.753 "target": "spare", 00:14:56.753 "progress": { 00:14:56.753 "blocks": 22528, 00:14:56.753 "percent": 17 00:14:56.753 } 00:14:56.753 }, 00:14:56.753 "base_bdevs_list": [ 00:14:56.753 { 00:14:56.753 "name": "spare", 00:14:56.753 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:14:56.753 "is_configured": true, 00:14:56.753 "data_offset": 2048, 00:14:56.753 "data_size": 63488 00:14:56.753 }, 00:14:56.753 { 00:14:56.753 "name": "BaseBdev2", 00:14:56.753 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:56.753 "is_configured": true, 00:14:56.753 "data_offset": 2048, 00:14:56.753 "data_size": 63488 00:14:56.753 }, 00:14:56.753 { 00:14:56.753 "name": "BaseBdev3", 00:14:56.753 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:56.753 "is_configured": true, 00:14:56.753 "data_offset": 2048, 00:14:56.753 "data_size": 63488 00:14:56.753 } 00:14:56.753 ] 00:14:56.753 }' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.753 20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.753 
20:10:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.138 "name": "raid_bdev1", 00:14:58.138 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:58.138 "strip_size_kb": 64, 00:14:58.138 "state": "online", 00:14:58.138 "raid_level": "raid5f", 00:14:58.138 "superblock": true, 00:14:58.138 "num_base_bdevs": 3, 00:14:58.138 "num_base_bdevs_discovered": 3, 00:14:58.138 "num_base_bdevs_operational": 3, 00:14:58.138 "process": { 00:14:58.138 "type": "rebuild", 00:14:58.138 "target": "spare", 00:14:58.138 "progress": { 00:14:58.138 "blocks": 45056, 00:14:58.138 "percent": 35 00:14:58.138 } 00:14:58.138 }, 00:14:58.138 
"base_bdevs_list": [ 00:14:58.138 { 00:14:58.138 "name": "spare", 00:14:58.138 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:14:58.138 "is_configured": true, 00:14:58.138 "data_offset": 2048, 00:14:58.138 "data_size": 63488 00:14:58.138 }, 00:14:58.138 { 00:14:58.138 "name": "BaseBdev2", 00:14:58.138 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:58.138 "is_configured": true, 00:14:58.138 "data_offset": 2048, 00:14:58.138 "data_size": 63488 00:14:58.138 }, 00:14:58.138 { 00:14:58.138 "name": "BaseBdev3", 00:14:58.138 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:58.138 "is_configured": true, 00:14:58.138 "data_offset": 2048, 00:14:58.138 "data_size": 63488 00:14:58.138 } 00:14:58.138 ] 00:14:58.138 }' 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.138 20:10:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.078 20:10:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.078 "name": "raid_bdev1", 00:14:59.078 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:14:59.078 "strip_size_kb": 64, 00:14:59.078 "state": "online", 00:14:59.078 "raid_level": "raid5f", 00:14:59.078 "superblock": true, 00:14:59.078 "num_base_bdevs": 3, 00:14:59.078 "num_base_bdevs_discovered": 3, 00:14:59.078 "num_base_bdevs_operational": 3, 00:14:59.078 "process": { 00:14:59.078 "type": "rebuild", 00:14:59.078 "target": "spare", 00:14:59.078 "progress": { 00:14:59.078 "blocks": 69632, 00:14:59.078 "percent": 54 00:14:59.078 } 00:14:59.078 }, 00:14:59.078 "base_bdevs_list": [ 00:14:59.078 { 00:14:59.078 "name": "spare", 00:14:59.078 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:14:59.078 "is_configured": true, 00:14:59.078 "data_offset": 2048, 00:14:59.078 "data_size": 63488 00:14:59.078 }, 00:14:59.078 { 00:14:59.078 "name": "BaseBdev2", 00:14:59.078 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:14:59.078 "is_configured": true, 00:14:59.078 "data_offset": 2048, 00:14:59.078 "data_size": 63488 00:14:59.078 }, 00:14:59.078 { 00:14:59.078 "name": "BaseBdev3", 00:14:59.078 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:14:59.078 "is_configured": true, 00:14:59.078 "data_offset": 2048, 00:14:59.078 "data_size": 63488 00:14:59.078 } 00:14:59.078 ] 00:14:59.078 }' 00:14:59.078 20:10:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.078 20:10:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.019 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.280 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.280 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.280 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.280 20:10:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.280 "name": "raid_bdev1", 00:15:00.280 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:00.280 
"strip_size_kb": 64, 00:15:00.280 "state": "online", 00:15:00.280 "raid_level": "raid5f", 00:15:00.280 "superblock": true, 00:15:00.280 "num_base_bdevs": 3, 00:15:00.280 "num_base_bdevs_discovered": 3, 00:15:00.280 "num_base_bdevs_operational": 3, 00:15:00.280 "process": { 00:15:00.280 "type": "rebuild", 00:15:00.280 "target": "spare", 00:15:00.280 "progress": { 00:15:00.280 "blocks": 92160, 00:15:00.280 "percent": 72 00:15:00.280 } 00:15:00.280 }, 00:15:00.280 "base_bdevs_list": [ 00:15:00.280 { 00:15:00.280 "name": "spare", 00:15:00.280 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:00.280 "is_configured": true, 00:15:00.280 "data_offset": 2048, 00:15:00.280 "data_size": 63488 00:15:00.280 }, 00:15:00.280 { 00:15:00.280 "name": "BaseBdev2", 00:15:00.280 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:00.280 "is_configured": true, 00:15:00.280 "data_offset": 2048, 00:15:00.280 "data_size": 63488 00:15:00.280 }, 00:15:00.280 { 00:15:00.280 "name": "BaseBdev3", 00:15:00.280 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:00.280 "is_configured": true, 00:15:00.280 "data_offset": 2048, 00:15:00.280 "data_size": 63488 00:15:00.280 } 00:15:00.280 ] 00:15:00.280 }' 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.280 20:10:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.221 "name": "raid_bdev1", 00:15:01.221 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:01.221 "strip_size_kb": 64, 00:15:01.221 "state": "online", 00:15:01.221 "raid_level": "raid5f", 00:15:01.221 "superblock": true, 00:15:01.221 "num_base_bdevs": 3, 00:15:01.221 "num_base_bdevs_discovered": 3, 00:15:01.221 "num_base_bdevs_operational": 3, 00:15:01.221 "process": { 00:15:01.221 "type": "rebuild", 00:15:01.221 "target": "spare", 00:15:01.221 "progress": { 00:15:01.221 "blocks": 116736, 00:15:01.221 "percent": 91 00:15:01.221 } 00:15:01.221 }, 00:15:01.221 "base_bdevs_list": [ 00:15:01.221 { 00:15:01.221 "name": "spare", 00:15:01.221 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:01.221 "is_configured": true, 00:15:01.221 "data_offset": 2048, 00:15:01.221 "data_size": 63488 00:15:01.221 }, 00:15:01.221 { 00:15:01.221 "name": "BaseBdev2", 00:15:01.221 "uuid": 
"c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:01.221 "is_configured": true, 00:15:01.221 "data_offset": 2048, 00:15:01.221 "data_size": 63488 00:15:01.221 }, 00:15:01.221 { 00:15:01.221 "name": "BaseBdev3", 00:15:01.221 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:01.221 "is_configured": true, 00:15:01.221 "data_offset": 2048, 00:15:01.221 "data_size": 63488 00:15:01.221 } 00:15:01.221 ] 00:15:01.221 }' 00:15:01.221 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.481 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.481 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.481 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.481 20:10:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.741 [2024-12-08 20:10:33.635532] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:01.741 [2024-12-08 20:10:33.635600] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:01.741 [2024-12-08 20:10:33.635700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.311 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.571 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.571 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.571 "name": "raid_bdev1", 00:15:02.571 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:02.571 "strip_size_kb": 64, 00:15:02.571 "state": "online", 00:15:02.572 "raid_level": "raid5f", 00:15:02.572 "superblock": true, 00:15:02.572 "num_base_bdevs": 3, 00:15:02.572 "num_base_bdevs_discovered": 3, 00:15:02.572 "num_base_bdevs_operational": 3, 00:15:02.572 "base_bdevs_list": [ 00:15:02.572 { 00:15:02.572 "name": "spare", 00:15:02.572 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev2", 00:15:02.572 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev3", 00:15:02.572 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 } 00:15:02.572 ] 00:15:02.572 }' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.572 "name": "raid_bdev1", 00:15:02.572 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:02.572 "strip_size_kb": 64, 00:15:02.572 "state": "online", 00:15:02.572 "raid_level": "raid5f", 00:15:02.572 "superblock": true, 00:15:02.572 "num_base_bdevs": 3, 00:15:02.572 "num_base_bdevs_discovered": 3, 00:15:02.572 "num_base_bdevs_operational": 3, 00:15:02.572 "base_bdevs_list": [ 
00:15:02.572 { 00:15:02.572 "name": "spare", 00:15:02.572 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev2", 00:15:02.572 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 }, 00:15:02.572 { 00:15:02.572 "name": "BaseBdev3", 00:15:02.572 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:02.572 "is_configured": true, 00:15:02.572 "data_offset": 2048, 00:15:02.572 "data_size": 63488 00:15:02.572 } 00:15:02.572 ] 00:15:02.572 }' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.572 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.833 20:10:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.833 "name": "raid_bdev1", 00:15:02.833 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:02.833 "strip_size_kb": 64, 00:15:02.833 "state": "online", 00:15:02.833 "raid_level": "raid5f", 00:15:02.833 "superblock": true, 00:15:02.833 "num_base_bdevs": 3, 00:15:02.833 "num_base_bdevs_discovered": 3, 00:15:02.833 "num_base_bdevs_operational": 3, 00:15:02.833 "base_bdevs_list": [ 00:15:02.833 { 00:15:02.833 "name": "spare", 00:15:02.833 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:02.833 "is_configured": true, 00:15:02.833 "data_offset": 2048, 00:15:02.833 "data_size": 63488 00:15:02.833 }, 00:15:02.833 { 00:15:02.833 "name": "BaseBdev2", 00:15:02.833 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:02.833 "is_configured": true, 00:15:02.833 "data_offset": 2048, 00:15:02.833 "data_size": 63488 00:15:02.833 }, 00:15:02.833 { 00:15:02.833 "name": "BaseBdev3", 00:15:02.833 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:02.833 "is_configured": true, 00:15:02.833 "data_offset": 2048, 00:15:02.833 
"data_size": 63488 00:15:02.833 } 00:15:02.833 ] 00:15:02.833 }' 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.833 20:10:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.094 [2024-12-08 20:10:35.055292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.094 [2024-12-08 20:10:35.055367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.094 [2024-12-08 20:10:35.055488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.094 [2024-12-08 20:10:35.055653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.094 [2024-12-08 20:10:35.055713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.094 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.095 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.095 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.354 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:03.355 /dev/nbd0 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:03.355 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.614 20:10:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.614 1+0 records in 00:15:03.614 1+0 records out 00:15:03.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220502 s, 18.6 MB/s 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:03.614 /dev/nbd1 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:03.614 20:10:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.614 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.614 1+0 records in 00:15:03.614 1+0 records out 00:15:03.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398413 s, 10.3 MB/s 00:15:03.615 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.874 20:10:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.874 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.134 20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.134 
20:10:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.411 [2024-12-08 20:10:36.213109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.411 
[2024-12-08 20:10:36.213169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.411 [2024-12-08 20:10:36.213192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:04.411 [2024-12-08 20:10:36.213203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.411 [2024-12-08 20:10:36.215407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.411 [2024-12-08 20:10:36.215490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.411 [2024-12-08 20:10:36.215602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:04.411 [2024-12-08 20:10:36.215656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.411 [2024-12-08 20:10:36.215791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.411 [2024-12-08 20:10:36.215892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.411 spare 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.411 [2024-12-08 20:10:36.315809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:04.411 [2024-12-08 20:10:36.315848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.411 [2024-12-08 20:10:36.316125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:04.411 [2024-12-08 20:10:36.321373] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:04.411 [2024-12-08 20:10:36.321429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:04.411 [2024-12-08 20:10:36.321624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.411 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.411 "name": "raid_bdev1", 00:15:04.411 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:04.411 "strip_size_kb": 64, 00:15:04.411 "state": "online", 00:15:04.411 "raid_level": "raid5f", 00:15:04.411 "superblock": true, 00:15:04.411 "num_base_bdevs": 3, 00:15:04.411 "num_base_bdevs_discovered": 3, 00:15:04.412 "num_base_bdevs_operational": 3, 00:15:04.412 "base_bdevs_list": [ 00:15:04.412 { 00:15:04.412 "name": "spare", 00:15:04.412 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:04.412 "is_configured": true, 00:15:04.412 "data_offset": 2048, 00:15:04.412 "data_size": 63488 00:15:04.412 }, 00:15:04.412 { 00:15:04.412 "name": "BaseBdev2", 00:15:04.412 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:04.412 "is_configured": true, 00:15:04.412 "data_offset": 2048, 00:15:04.412 "data_size": 63488 00:15:04.412 }, 00:15:04.412 { 00:15:04.412 "name": "BaseBdev3", 00:15:04.412 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:04.412 "is_configured": true, 00:15:04.412 "data_offset": 2048, 00:15:04.412 "data_size": 63488 00:15:04.412 } 00:15:04.412 ] 00:15:04.412 }' 00:15:04.412 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.412 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.981 "name": "raid_bdev1", 00:15:04.981 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:04.981 "strip_size_kb": 64, 00:15:04.981 "state": "online", 00:15:04.981 "raid_level": "raid5f", 00:15:04.981 "superblock": true, 00:15:04.981 "num_base_bdevs": 3, 00:15:04.981 "num_base_bdevs_discovered": 3, 00:15:04.981 "num_base_bdevs_operational": 3, 00:15:04.981 "base_bdevs_list": [ 00:15:04.981 { 00:15:04.981 "name": "spare", 00:15:04.981 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:04.981 "is_configured": true, 00:15:04.981 "data_offset": 2048, 00:15:04.981 "data_size": 63488 00:15:04.981 }, 00:15:04.981 { 00:15:04.981 "name": "BaseBdev2", 00:15:04.981 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:04.981 "is_configured": true, 00:15:04.981 "data_offset": 2048, 00:15:04.981 "data_size": 63488 00:15:04.981 }, 00:15:04.981 { 00:15:04.981 "name": "BaseBdev3", 00:15:04.981 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:04.981 "is_configured": true, 00:15:04.981 "data_offset": 2048, 00:15:04.981 "data_size": 63488 00:15:04.981 } 00:15:04.981 ] 00:15:04.981 }' 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.981 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.241 [2024-12-08 20:10:36.982842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.241 20:10:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.241 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.241 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.241 "name": "raid_bdev1", 00:15:05.241 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:05.241 "strip_size_kb": 64, 00:15:05.241 "state": "online", 00:15:05.241 "raid_level": "raid5f", 00:15:05.241 "superblock": true, 00:15:05.241 "num_base_bdevs": 3, 00:15:05.241 "num_base_bdevs_discovered": 2, 00:15:05.241 "num_base_bdevs_operational": 2, 00:15:05.241 "base_bdevs_list": [ 00:15:05.241 { 00:15:05.241 "name": null, 00:15:05.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.241 "is_configured": false, 00:15:05.241 "data_offset": 0, 00:15:05.241 "data_size": 63488 00:15:05.241 }, 00:15:05.241 { 00:15:05.241 "name": "BaseBdev2", 
00:15:05.241 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:05.241 "is_configured": true, 00:15:05.241 "data_offset": 2048, 00:15:05.241 "data_size": 63488 00:15:05.241 }, 00:15:05.241 { 00:15:05.241 "name": "BaseBdev3", 00:15:05.241 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:05.241 "is_configured": true, 00:15:05.241 "data_offset": 2048, 00:15:05.241 "data_size": 63488 00:15:05.241 } 00:15:05.241 ] 00:15:05.241 }' 00:15:05.242 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.242 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.500 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.500 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.500 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.500 [2024-12-08 20:10:37.462064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.500 [2024-12-08 20:10:37.462308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:05.500 [2024-12-08 20:10:37.462371] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:05.500 [2024-12-08 20:10:37.462466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.760 [2024-12-08 20:10:37.478131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:05.760 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.760 20:10:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:05.760 [2024-12-08 20:10:37.485599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.697 "name": "raid_bdev1", 00:15:06.697 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:06.697 "strip_size_kb": 64, 00:15:06.697 "state": "online", 00:15:06.697 
"raid_level": "raid5f", 00:15:06.697 "superblock": true, 00:15:06.697 "num_base_bdevs": 3, 00:15:06.697 "num_base_bdevs_discovered": 3, 00:15:06.697 "num_base_bdevs_operational": 3, 00:15:06.697 "process": { 00:15:06.697 "type": "rebuild", 00:15:06.697 "target": "spare", 00:15:06.697 "progress": { 00:15:06.697 "blocks": 20480, 00:15:06.697 "percent": 16 00:15:06.697 } 00:15:06.697 }, 00:15:06.697 "base_bdevs_list": [ 00:15:06.697 { 00:15:06.697 "name": "spare", 00:15:06.697 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:06.697 "is_configured": true, 00:15:06.697 "data_offset": 2048, 00:15:06.697 "data_size": 63488 00:15:06.697 }, 00:15:06.697 { 00:15:06.697 "name": "BaseBdev2", 00:15:06.697 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:06.697 "is_configured": true, 00:15:06.697 "data_offset": 2048, 00:15:06.697 "data_size": 63488 00:15:06.697 }, 00:15:06.697 { 00:15:06.697 "name": "BaseBdev3", 00:15:06.697 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:06.697 "is_configured": true, 00:15:06.697 "data_offset": 2048, 00:15:06.697 "data_size": 63488 00:15:06.697 } 00:15:06.697 ] 00:15:06.697 }' 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.697 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.697 [2024-12-08 20:10:38.632453] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.956 [2024-12-08 20:10:38.693249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.956 [2024-12-08 20:10:38.693303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.957 [2024-12-08 20:10:38.693318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.957 [2024-12-08 20:10:38.693327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.957 "name": "raid_bdev1", 00:15:06.957 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:06.957 "strip_size_kb": 64, 00:15:06.957 "state": "online", 00:15:06.957 "raid_level": "raid5f", 00:15:06.957 "superblock": true, 00:15:06.957 "num_base_bdevs": 3, 00:15:06.957 "num_base_bdevs_discovered": 2, 00:15:06.957 "num_base_bdevs_operational": 2, 00:15:06.957 "base_bdevs_list": [ 00:15:06.957 { 00:15:06.957 "name": null, 00:15:06.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.957 "is_configured": false, 00:15:06.957 "data_offset": 0, 00:15:06.957 "data_size": 63488 00:15:06.957 }, 00:15:06.957 { 00:15:06.957 "name": "BaseBdev2", 00:15:06.957 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:06.957 "is_configured": true, 00:15:06.957 "data_offset": 2048, 00:15:06.957 "data_size": 63488 00:15:06.957 }, 00:15:06.957 { 00:15:06.957 "name": "BaseBdev3", 00:15:06.957 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:06.957 "is_configured": true, 00:15:06.957 "data_offset": 2048, 00:15:06.957 "data_size": 63488 00:15:06.957 } 00:15:06.957 ] 00:15:06.957 }' 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.957 20:10:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.216 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.474 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.474 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.474 [2024-12-08 20:10:39.198089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.474 [2024-12-08 20:10:39.198157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.474 [2024-12-08 20:10:39.198180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:07.474 [2024-12-08 20:10:39.198194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.474 [2024-12-08 20:10:39.198728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.474 [2024-12-08 20:10:39.198751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:07.474 [2024-12-08 20:10:39.198851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:07.474 [2024-12-08 20:10:39.198869] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:07.474 [2024-12-08 20:10:39.198882] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:07.474 [2024-12-08 20:10:39.198906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.474 [2024-12-08 20:10:39.215018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:07.474 spare 00:15:07.474 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.474 20:10:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:07.474 [2024-12-08 20:10:39.222678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.410 "name": "raid_bdev1", 00:15:08.410 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:08.410 "strip_size_kb": 64, 00:15:08.410 "state": 
"online", 00:15:08.410 "raid_level": "raid5f", 00:15:08.410 "superblock": true, 00:15:08.410 "num_base_bdevs": 3, 00:15:08.410 "num_base_bdevs_discovered": 3, 00:15:08.410 "num_base_bdevs_operational": 3, 00:15:08.410 "process": { 00:15:08.410 "type": "rebuild", 00:15:08.410 "target": "spare", 00:15:08.410 "progress": { 00:15:08.410 "blocks": 20480, 00:15:08.410 "percent": 16 00:15:08.410 } 00:15:08.410 }, 00:15:08.410 "base_bdevs_list": [ 00:15:08.410 { 00:15:08.410 "name": "spare", 00:15:08.410 "uuid": "d1b66ff0-026f-5083-8c06-084d3b4e3a76", 00:15:08.410 "is_configured": true, 00:15:08.410 "data_offset": 2048, 00:15:08.410 "data_size": 63488 00:15:08.410 }, 00:15:08.410 { 00:15:08.410 "name": "BaseBdev2", 00:15:08.410 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:08.410 "is_configured": true, 00:15:08.410 "data_offset": 2048, 00:15:08.410 "data_size": 63488 00:15:08.410 }, 00:15:08.410 { 00:15:08.410 "name": "BaseBdev3", 00:15:08.410 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:08.410 "is_configured": true, 00:15:08.410 "data_offset": 2048, 00:15:08.410 "data_size": 63488 00:15:08.410 } 00:15:08.410 ] 00:15:08.410 }' 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.410 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.410 [2024-12-08 20:10:40.357396] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.670 [2024-12-08 20:10:40.430307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.670 [2024-12-08 20:10:40.430406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.670 [2024-12-08 20:10:40.430460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.670 [2024-12-08 20:10:40.430480] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.670 "name": "raid_bdev1", 00:15:08.670 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:08.670 "strip_size_kb": 64, 00:15:08.670 "state": "online", 00:15:08.670 "raid_level": "raid5f", 00:15:08.670 "superblock": true, 00:15:08.670 "num_base_bdevs": 3, 00:15:08.670 "num_base_bdevs_discovered": 2, 00:15:08.670 "num_base_bdevs_operational": 2, 00:15:08.670 "base_bdevs_list": [ 00:15:08.670 { 00:15:08.670 "name": null, 00:15:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.670 "is_configured": false, 00:15:08.670 "data_offset": 0, 00:15:08.670 "data_size": 63488 00:15:08.670 }, 00:15:08.670 { 00:15:08.670 "name": "BaseBdev2", 00:15:08.670 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:08.670 "is_configured": true, 00:15:08.670 "data_offset": 2048, 00:15:08.670 "data_size": 63488 00:15:08.670 }, 00:15:08.670 { 00:15:08.670 "name": "BaseBdev3", 00:15:08.670 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:08.670 "is_configured": true, 00:15:08.670 "data_offset": 2048, 00:15:08.670 "data_size": 63488 00:15:08.670 } 00:15:08.670 ] 00:15:08.670 }' 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.670 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.930 "name": "raid_bdev1", 00:15:08.930 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:08.930 "strip_size_kb": 64, 00:15:08.930 "state": "online", 00:15:08.930 "raid_level": "raid5f", 00:15:08.930 "superblock": true, 00:15:08.930 "num_base_bdevs": 3, 00:15:08.930 "num_base_bdevs_discovered": 2, 00:15:08.930 "num_base_bdevs_operational": 2, 00:15:08.930 "base_bdevs_list": [ 00:15:08.930 { 00:15:08.930 "name": null, 00:15:08.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.930 "is_configured": false, 00:15:08.930 "data_offset": 0, 00:15:08.930 "data_size": 63488 00:15:08.930 }, 00:15:08.930 { 00:15:08.930 "name": "BaseBdev2", 00:15:08.930 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:08.930 "is_configured": true, 00:15:08.930 "data_offset": 2048, 00:15:08.930 "data_size": 63488 00:15:08.930 }, 00:15:08.930 { 00:15:08.930 "name": "BaseBdev3", 00:15:08.930 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:08.930 "is_configured": true, 
00:15:08.930 "data_offset": 2048, 00:15:08.930 "data_size": 63488 00:15:08.930 } 00:15:08.930 ] 00:15:08.930 }' 00:15:08.930 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.190 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.190 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.190 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.190 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:09.190 20:10:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.190 [2024-12-08 20:10:41.016412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.190 [2024-12-08 20:10:41.016468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.190 [2024-12-08 20:10:41.016496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:09.190 [2024-12-08 20:10:41.016508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.190 [2024-12-08 20:10:41.017056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.190 [2024-12-08 
20:10:41.017079] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.190 [2024-12-08 20:10:41.017175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:09.190 [2024-12-08 20:10:41.017192] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:09.190 [2024-12-08 20:10:41.017218] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:09.190 [2024-12-08 20:10:41.017229] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:09.190 BaseBdev1 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.190 20:10:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.130 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.131 20:10:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.131 "name": "raid_bdev1", 00:15:10.131 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:10.131 "strip_size_kb": 64, 00:15:10.131 "state": "online", 00:15:10.131 "raid_level": "raid5f", 00:15:10.131 "superblock": true, 00:15:10.131 "num_base_bdevs": 3, 00:15:10.131 "num_base_bdevs_discovered": 2, 00:15:10.131 "num_base_bdevs_operational": 2, 00:15:10.131 "base_bdevs_list": [ 00:15:10.131 { 00:15:10.131 "name": null, 00:15:10.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.131 "is_configured": false, 00:15:10.131 "data_offset": 0, 00:15:10.131 "data_size": 63488 00:15:10.131 }, 00:15:10.131 { 00:15:10.131 "name": "BaseBdev2", 00:15:10.131 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 00:15:10.131 "is_configured": true, 00:15:10.131 "data_offset": 2048, 00:15:10.131 "data_size": 63488 00:15:10.131 }, 00:15:10.131 { 00:15:10.131 "name": "BaseBdev3", 00:15:10.131 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641", 00:15:10.131 "is_configured": true, 00:15:10.131 "data_offset": 2048, 00:15:10.131 "data_size": 63488 00:15:10.131 } 00:15:10.131 ] 00:15:10.131 }' 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.131 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.700 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.700 "name": "raid_bdev1", 00:15:10.700 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc", 00:15:10.700 "strip_size_kb": 64, 00:15:10.700 "state": "online", 00:15:10.700 "raid_level": "raid5f", 00:15:10.700 "superblock": true, 00:15:10.700 "num_base_bdevs": 3, 00:15:10.700 "num_base_bdevs_discovered": 2, 00:15:10.700 "num_base_bdevs_operational": 2, 00:15:10.700 "base_bdevs_list": [ 00:15:10.700 { 00:15:10.700 "name": null, 00:15:10.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.700 "is_configured": false, 00:15:10.700 "data_offset": 0, 00:15:10.700 "data_size": 63488 00:15:10.700 }, 00:15:10.700 { 00:15:10.700 "name": "BaseBdev2", 00:15:10.700 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18", 
00:15:10.700 "is_configured": true,
00:15:10.700 "data_offset": 2048,
00:15:10.700 "data_size": 63488
00:15:10.700 },
00:15:10.700 {
00:15:10.700 "name": "BaseBdev3",
00:15:10.701 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641",
00:15:10.701 "is_configured": true,
00:15:10.701 "data_offset": 2048,
00:15:10.701 "data_size": 63488
00:15:10.701 }
00:15:10.701 ]
00:15:10.701 }'
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.701 [2024-12-08 20:10:42.601908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:10.701 [2024-12-08 20:10:42.602090] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:10.701 [2024-12-08 20:10:42.602106] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:10.701 request:
00:15:10.701 {
00:15:10.701 "base_bdev": "BaseBdev1",
00:15:10.701 "raid_bdev": "raid_bdev1",
00:15:10.701 "method": "bdev_raid_add_base_bdev",
00:15:10.701 "req_id": 1
00:15:10.701 }
00:15:10.701 Got JSON-RPC error response
00:15:10.701 response:
00:15:10.701 {
00:15:10.701 "code": -22,
00:15:10.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:15:10.701 }
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:10.701 20:10:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:11.693 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.969 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:11.969 "name": "raid_bdev1",
00:15:11.969 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc",
00:15:11.969 "strip_size_kb": 64,
00:15:11.969 "state": "online",
00:15:11.969 "raid_level": "raid5f",
00:15:11.969 "superblock": true,
00:15:11.969 "num_base_bdevs": 3,
00:15:11.969 "num_base_bdevs_discovered": 2,
00:15:11.969 "num_base_bdevs_operational": 2,
00:15:11.969 "base_bdevs_list": [
00:15:11.969 {
00:15:11.969 "name": null,
00:15:11.969 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.969 "is_configured": false,
00:15:11.969 "data_offset": 0,
00:15:11.969 "data_size": 63488
00:15:11.969 },
00:15:11.969 {
00:15:11.969 "name": "BaseBdev2",
00:15:11.969 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18",
00:15:11.969 "is_configured": true,
00:15:11.969 "data_offset": 2048,
00:15:11.969 "data_size": 63488
00:15:11.969 },
00:15:11.969 {
00:15:11.969 "name": "BaseBdev3",
00:15:11.969 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641",
00:15:11.969 "is_configured": true,
00:15:11.969 "data_offset": 2048,
00:15:11.969 "data_size": 63488
00:15:11.969 }
00:15:11.969 ]
00:15:11.969 }'
00:15:11.969 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:11.969 20:10:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.228 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:12.228 "name": "raid_bdev1",
00:15:12.228 "uuid": "b8af4a61-42f3-44c2-a91f-f2686d5ba1fc",
00:15:12.228 "strip_size_kb": 64,
00:15:12.228 "state": "online",
00:15:12.228 "raid_level": "raid5f",
00:15:12.228 "superblock": true,
00:15:12.228 "num_base_bdevs": 3,
00:15:12.228 "num_base_bdevs_discovered": 2,
00:15:12.228 "num_base_bdevs_operational": 2,
00:15:12.228 "base_bdevs_list": [
00:15:12.228 {
00:15:12.228 "name": null,
00:15:12.228 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:12.228 "is_configured": false,
00:15:12.228 "data_offset": 0,
00:15:12.228 "data_size": 63488
00:15:12.228 },
00:15:12.228 {
00:15:12.228 "name": "BaseBdev2",
00:15:12.228 "uuid": "c6befa29-ab15-514a-8ea1-3b819317ff18",
00:15:12.228 "is_configured": true,
00:15:12.228 "data_offset": 2048,
00:15:12.228 "data_size": 63488
00:15:12.228 },
00:15:12.228 {
00:15:12.228 "name": "BaseBdev3",
00:15:12.228 "uuid": "d665584b-4dbe-56fe-8715-80486bd44641",
00:15:12.228 "is_configured": true,
00:15:12.228 "data_offset": 2048,
00:15:12.228 "data_size": 63488
00:15:12.228 }
00:15:12.228 ]
00:15:12.228 }'
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81705
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81705 ']'
00:15:12.229 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81705
00:15:12.488 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81705
killing process with pid 81705
Received shutdown signal, test time was about 60.000000 seconds
00:15:12.489
00:15:12.489 Latency(us)
00:15:12.489 [2024-12-08T20:10:44.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:12.489 [2024-12-08T20:10:44.467Z] ===================================================================================================================
00:15:12.489 [2024-12-08T20:10:44.467Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81705'
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81705
00:15:12.489 [2024-12-08 20:10:44.227631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:12.489 [2024-12-08 20:10:44.227750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:12.489 20:10:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81705
00:15:12.489 [2024-12-08 20:10:44.227812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:12.489 [2024-12-08 20:10:44.227823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:15:12.749 [2024-12-08 20:10:44.595195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:13.688 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:15:13.688
00:15:13.688 real 0m23.125s
00:15:13.688 user 0m29.705s
00:15:13.688 sys 0m2.670s
00:15:13.688 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:13.688 20:10:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:13.688 ************************************
00:15:13.688 END TEST raid5f_rebuild_test_sb
00:15:13.688 ************************************
00:15:13.948 20:10:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4}
00:15:13.948 20:10:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:15:13.948 20:10:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:13.948 20:10:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:13.948 20:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:13.948 ************************************
00:15:13.948 START TEST raid5f_state_function_test
00:15:13.948 ************************************
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82447
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82447'
Process raid pid: 82447
20:10:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82447
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82447 ']'
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:13.948 20:10:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.948 [2024-12-08 20:10:45.814165] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:15:13.948 [2024-12-08 20:10:45.814273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:14.207 [2024-12-08 20:10:45.985943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:14.207 [2024-12-08 20:10:46.092687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:14.467 [2024-12-08 20:10:46.290514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:14.467 [2024-12-08 20:10:46.290553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.727 [2024-12-08 20:10:46.640400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:14.727 [2024-12-08 20:10:46.640452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:14.727 [2024-12-08 20:10:46.640461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:14.727 [2024-12-08 20:10:46.640470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:14.727 [2024-12-08 20:10:46.640477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:14.727 [2024-12-08 20:10:46.640485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:14.727 [2024-12-08 20:10:46.640491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:14.727 [2024-12-08 20:10:46.640499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.727 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:14.727 "name": "Existed_Raid",
00:15:14.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.727 "strip_size_kb": 64,
00:15:14.727 "state": "configuring",
00:15:14.727 "raid_level": "raid5f",
00:15:14.727 "superblock": false,
00:15:14.727 "num_base_bdevs": 4,
00:15:14.727 "num_base_bdevs_discovered": 0,
00:15:14.727 "num_base_bdevs_operational": 4,
00:15:14.727 "base_bdevs_list": [
00:15:14.727 {
00:15:14.727 "name": "BaseBdev1",
00:15:14.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.727 "is_configured": false,
00:15:14.727 "data_offset": 0,
00:15:14.727 "data_size": 0
00:15:14.728 },
00:15:14.728 {
00:15:14.728 "name": "BaseBdev2",
00:15:14.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.728 "is_configured": false,
00:15:14.728 "data_offset": 0,
00:15:14.728 "data_size": 0
00:15:14.728 },
00:15:14.728 {
00:15:14.728 "name": "BaseBdev3",
00:15:14.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.728 "is_configured": false,
00:15:14.728 "data_offset": 0,
00:15:14.728 "data_size": 0
00:15:14.728 },
00:15:14.728 {
00:15:14.728 "name": "BaseBdev4",
00:15:14.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.728 "is_configured": false,
00:15:14.728 "data_offset": 0,
00:15:14.728 "data_size": 0
00:15:14.728 }
00:15:14.728 ]
00:15:14.728 }'
00:15:14.728 20:10:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:14.728 20:10:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.298 [2024-12-08 20:10:47.107531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:15.298 [2024-12-08 20:10:47.107570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.298 [2024-12-08 20:10:47.119526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:15.298 [2024-12-08 20:10:47.119561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:15.298 [2024-12-08 20:10:47.119570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:15.298 [2024-12-08 20:10:47.119579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:15.298 [2024-12-08 20:10:47.119585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:15.298 [2024-12-08 20:10:47.119593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:15.298 [2024-12-08 20:10:47.119599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:15.298 [2024-12-08 20:10:47.119607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.298 [2024-12-08 20:10:47.167465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:15.298 BaseBdev1
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.298 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.299 [
00:15:15.299 {
00:15:15.299 "name": "BaseBdev1",
00:15:15.299 "aliases": [
00:15:15.299 "364066df-adea-4728-814d-aa8793133386"
00:15:15.299 ],
00:15:15.299 "product_name": "Malloc disk",
00:15:15.299 "block_size": 512,
00:15:15.299 "num_blocks": 65536,
00:15:15.299 "uuid": "364066df-adea-4728-814d-aa8793133386",
00:15:15.299 "assigned_rate_limits": {
00:15:15.299 "rw_ios_per_sec": 0,
00:15:15.299 "rw_mbytes_per_sec": 0,
00:15:15.299 "r_mbytes_per_sec": 0,
00:15:15.299 "w_mbytes_per_sec": 0
00:15:15.299 },
00:15:15.299 "claimed": true,
00:15:15.299 "claim_type": "exclusive_write",
00:15:15.299 "zoned": false,
00:15:15.299 "supported_io_types": {
00:15:15.299 "read": true,
00:15:15.299 "write": true,
00:15:15.299 "unmap": true,
00:15:15.299 "flush": true,
00:15:15.299 "reset": true,
00:15:15.299 "nvme_admin": false,
00:15:15.299 "nvme_io": false,
00:15:15.299 "nvme_io_md": false,
00:15:15.299 "write_zeroes": true,
00:15:15.299 "zcopy": true,
00:15:15.299 "get_zone_info": false,
00:15:15.299 "zone_management": false,
00:15:15.299 "zone_append": false,
00:15:15.299 "compare": false,
00:15:15.299 "compare_and_write": false,
00:15:15.299 "abort": true,
00:15:15.299 "seek_hole": false,
00:15:15.299 "seek_data": false,
00:15:15.299 "copy": true,
00:15:15.299 "nvme_iov_md": false
00:15:15.299 },
00:15:15.299 "memory_domains": [
00:15:15.299 {
00:15:15.299 "dma_device_id": "system",
00:15:15.299 "dma_device_type": 1
00:15:15.299 },
00:15:15.299 {
00:15:15.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:15.299 "dma_device_type": 2
00:15:15.299 }
00:15:15.299 ],
00:15:15.299 "driver_specific": {}
00:15:15.299 }
00:15:15.299 ]
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.299 "name": "Existed_Raid",
00:15:15.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.299 "strip_size_kb": 64,
00:15:15.299 "state": "configuring",
00:15:15.299 "raid_level": "raid5f",
00:15:15.299 "superblock": false,
00:15:15.299 "num_base_bdevs": 4,
00:15:15.299 "num_base_bdevs_discovered": 1,
00:15:15.299 "num_base_bdevs_operational": 4,
00:15:15.299 "base_bdevs_list": [
00:15:15.299 {
00:15:15.299 "name": "BaseBdev1",
00:15:15.299 "uuid": "364066df-adea-4728-814d-aa8793133386",
00:15:15.299 "is_configured": true,
00:15:15.299 "data_offset": 0,
00:15:15.299 "data_size": 65536
00:15:15.299 },
00:15:15.299 {
00:15:15.299 "name": "BaseBdev2",
00:15:15.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.299 "is_configured": false,
00:15:15.299 "data_offset": 0,
00:15:15.299 "data_size": 0
00:15:15.299 },
00:15:15.299 {
00:15:15.299 "name": "BaseBdev3",
00:15:15.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.299 "is_configured": false,
00:15:15.299 "data_offset": 0,
00:15:15.299 "data_size": 0
00:15:15.299 },
00:15:15.299 {
00:15:15.299 "name": "BaseBdev4",
00:15:15.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.299 "is_configured": false,
00:15:15.299 "data_offset": 0,
00:15:15.299 "data_size": 0
00:15:15.299 }
00:15:15.299 ]
00:15:15.299 }'
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.299 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.868 [2024-12-08 20:10:47.594714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:15.868 [2024-12-08 20:10:47.594760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.868 [2024-12-08 20:10:47.606751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-12-08 20:10:47.608558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:15.868 [2024-12-08 20:10:47.608592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:15.868 [2024-12-08 20:10:47.608617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:15.868 [2024-12-08 20:10:47.608626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:15.868 [2024-12-08 20:10:47.608633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:15.868 [2024-12-08 20:10:47.608641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # ((
i < num_base_bdevs )) 00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.868 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.869 "name": "Existed_Raid", 00:15:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:15.869 "strip_size_kb": 64, 00:15:15.869 "state": "configuring", 00:15:15.869 "raid_level": "raid5f", 00:15:15.869 "superblock": false, 00:15:15.869 "num_base_bdevs": 4, 00:15:15.869 "num_base_bdevs_discovered": 1, 00:15:15.869 "num_base_bdevs_operational": 4, 00:15:15.869 "base_bdevs_list": [ 00:15:15.869 { 00:15:15.869 "name": "BaseBdev1", 00:15:15.869 "uuid": "364066df-adea-4728-814d-aa8793133386", 00:15:15.869 "is_configured": true, 00:15:15.869 "data_offset": 0, 00:15:15.869 "data_size": 65536 00:15:15.869 }, 00:15:15.869 { 00:15:15.869 "name": "BaseBdev2", 00:15:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.869 "is_configured": false, 00:15:15.869 "data_offset": 0, 00:15:15.869 "data_size": 0 00:15:15.869 }, 00:15:15.869 { 00:15:15.869 "name": "BaseBdev3", 00:15:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.869 "is_configured": false, 00:15:15.869 "data_offset": 0, 00:15:15.869 "data_size": 0 00:15:15.869 }, 00:15:15.869 { 00:15:15.869 "name": "BaseBdev4", 00:15:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.869 "is_configured": false, 00:15:15.869 "data_offset": 0, 00:15:15.869 "data_size": 0 00:15:15.869 } 00:15:15.869 ] 00:15:15.869 }' 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.869 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 20:10:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.129 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.129 20:10:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 [2024-12-08 20:10:48.007818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.129 BaseBdev2 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.129 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 [ 00:15:16.129 { 00:15:16.129 "name": "BaseBdev2", 00:15:16.129 "aliases": [ 00:15:16.129 "bf518791-b8b7-4ea4-b0aa-556deffa7250" 00:15:16.129 ], 00:15:16.129 "product_name": "Malloc disk", 00:15:16.129 "block_size": 512, 00:15:16.129 "num_blocks": 65536, 00:15:16.129 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:16.129 "assigned_rate_limits": { 00:15:16.129 "rw_ios_per_sec": 0, 00:15:16.129 "rw_mbytes_per_sec": 0, 00:15:16.129 
"r_mbytes_per_sec": 0, 00:15:16.129 "w_mbytes_per_sec": 0 00:15:16.129 }, 00:15:16.129 "claimed": true, 00:15:16.129 "claim_type": "exclusive_write", 00:15:16.129 "zoned": false, 00:15:16.129 "supported_io_types": { 00:15:16.129 "read": true, 00:15:16.129 "write": true, 00:15:16.129 "unmap": true, 00:15:16.129 "flush": true, 00:15:16.129 "reset": true, 00:15:16.129 "nvme_admin": false, 00:15:16.129 "nvme_io": false, 00:15:16.129 "nvme_io_md": false, 00:15:16.129 "write_zeroes": true, 00:15:16.129 "zcopy": true, 00:15:16.129 "get_zone_info": false, 00:15:16.129 "zone_management": false, 00:15:16.129 "zone_append": false, 00:15:16.129 "compare": false, 00:15:16.129 "compare_and_write": false, 00:15:16.129 "abort": true, 00:15:16.129 "seek_hole": false, 00:15:16.129 "seek_data": false, 00:15:16.129 "copy": true, 00:15:16.129 "nvme_iov_md": false 00:15:16.129 }, 00:15:16.129 "memory_domains": [ 00:15:16.129 { 00:15:16.129 "dma_device_id": "system", 00:15:16.129 "dma_device_type": 1 00:15:16.130 }, 00:15:16.130 { 00:15:16.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.130 "dma_device_type": 2 00:15:16.130 } 00:15:16.130 ], 00:15:16.130 "driver_specific": {} 00:15:16.130 } 00:15:16.130 ] 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.130 "name": "Existed_Raid", 00:15:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.130 "strip_size_kb": 64, 00:15:16.130 "state": "configuring", 00:15:16.130 "raid_level": "raid5f", 00:15:16.130 "superblock": false, 00:15:16.130 "num_base_bdevs": 4, 00:15:16.130 "num_base_bdevs_discovered": 2, 00:15:16.130 "num_base_bdevs_operational": 4, 00:15:16.130 "base_bdevs_list": [ 00:15:16.130 { 00:15:16.130 "name": "BaseBdev1", 00:15:16.130 "uuid": 
"364066df-adea-4728-814d-aa8793133386", 00:15:16.130 "is_configured": true, 00:15:16.130 "data_offset": 0, 00:15:16.130 "data_size": 65536 00:15:16.130 }, 00:15:16.130 { 00:15:16.130 "name": "BaseBdev2", 00:15:16.130 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:16.130 "is_configured": true, 00:15:16.130 "data_offset": 0, 00:15:16.130 "data_size": 65536 00:15:16.130 }, 00:15:16.130 { 00:15:16.130 "name": "BaseBdev3", 00:15:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.130 "is_configured": false, 00:15:16.130 "data_offset": 0, 00:15:16.130 "data_size": 0 00:15:16.130 }, 00:15:16.130 { 00:15:16.130 "name": "BaseBdev4", 00:15:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.130 "is_configured": false, 00:15:16.130 "data_offset": 0, 00:15:16.130 "data_size": 0 00:15:16.130 } 00:15:16.130 ] 00:15:16.130 }' 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.130 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 [2024-12-08 20:10:48.528650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.701 BaseBdev3 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 [ 00:15:16.701 { 00:15:16.701 "name": "BaseBdev3", 00:15:16.701 "aliases": [ 00:15:16.701 "7eaf38e6-ee39-4298-a559-e97a42957910" 00:15:16.701 ], 00:15:16.701 "product_name": "Malloc disk", 00:15:16.701 "block_size": 512, 00:15:16.701 "num_blocks": 65536, 00:15:16.701 "uuid": "7eaf38e6-ee39-4298-a559-e97a42957910", 00:15:16.701 "assigned_rate_limits": { 00:15:16.701 "rw_ios_per_sec": 0, 00:15:16.701 "rw_mbytes_per_sec": 0, 00:15:16.701 "r_mbytes_per_sec": 0, 00:15:16.701 "w_mbytes_per_sec": 0 00:15:16.701 }, 00:15:16.701 "claimed": true, 00:15:16.701 "claim_type": "exclusive_write", 00:15:16.701 "zoned": false, 00:15:16.701 "supported_io_types": { 00:15:16.701 "read": true, 00:15:16.701 "write": true, 00:15:16.701 "unmap": true, 00:15:16.701 "flush": true, 00:15:16.701 "reset": true, 00:15:16.701 "nvme_admin": false, 
00:15:16.701 "nvme_io": false, 00:15:16.701 "nvme_io_md": false, 00:15:16.701 "write_zeroes": true, 00:15:16.701 "zcopy": true, 00:15:16.701 "get_zone_info": false, 00:15:16.701 "zone_management": false, 00:15:16.701 "zone_append": false, 00:15:16.701 "compare": false, 00:15:16.701 "compare_and_write": false, 00:15:16.701 "abort": true, 00:15:16.701 "seek_hole": false, 00:15:16.701 "seek_data": false, 00:15:16.701 "copy": true, 00:15:16.701 "nvme_iov_md": false 00:15:16.701 }, 00:15:16.701 "memory_domains": [ 00:15:16.701 { 00:15:16.701 "dma_device_id": "system", 00:15:16.701 "dma_device_type": 1 00:15:16.701 }, 00:15:16.701 { 00:15:16.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.701 "dma_device_type": 2 00:15:16.701 } 00:15:16.701 ], 00:15:16.701 "driver_specific": {} 00:15:16.701 } 00:15:16.701 ] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.701 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.701 "name": "Existed_Raid", 00:15:16.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.701 "strip_size_kb": 64, 00:15:16.701 "state": "configuring", 00:15:16.701 "raid_level": "raid5f", 00:15:16.701 "superblock": false, 00:15:16.702 "num_base_bdevs": 4, 00:15:16.702 "num_base_bdevs_discovered": 3, 00:15:16.702 "num_base_bdevs_operational": 4, 00:15:16.702 "base_bdevs_list": [ 00:15:16.702 { 00:15:16.702 "name": "BaseBdev1", 00:15:16.702 "uuid": "364066df-adea-4728-814d-aa8793133386", 00:15:16.702 "is_configured": true, 00:15:16.702 "data_offset": 0, 00:15:16.702 "data_size": 65536 00:15:16.702 }, 00:15:16.702 { 00:15:16.702 "name": "BaseBdev2", 00:15:16.702 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:16.702 "is_configured": true, 00:15:16.702 "data_offset": 0, 00:15:16.702 "data_size": 65536 00:15:16.702 }, 00:15:16.702 { 
00:15:16.702 "name": "BaseBdev3", 00:15:16.702 "uuid": "7eaf38e6-ee39-4298-a559-e97a42957910", 00:15:16.702 "is_configured": true, 00:15:16.702 "data_offset": 0, 00:15:16.702 "data_size": 65536 00:15:16.702 }, 00:15:16.702 { 00:15:16.702 "name": "BaseBdev4", 00:15:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.702 "is_configured": false, 00:15:16.702 "data_offset": 0, 00:15:16.702 "data_size": 0 00:15:16.702 } 00:15:16.702 ] 00:15:16.702 }' 00:15:16.702 20:10:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.702 20:10:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 [2024-12-08 20:10:49.045323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.271 [2024-12-08 20:10:49.045389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:17.271 [2024-12-08 20:10:49.045399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:17.271 [2024-12-08 20:10:49.045654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:17.271 [2024-12-08 20:10:49.052548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:17.271 [2024-12-08 20:10:49.052575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:17.271 [2024-12-08 20:10:49.052846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.271 BaseBdev4 00:15:17.271 20:10:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.271 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.271 [ 00:15:17.271 { 00:15:17.271 "name": "BaseBdev4", 00:15:17.271 "aliases": [ 00:15:17.271 "02a8e070-74ee-4b28-bc1e-fffc748fda60" 00:15:17.271 ], 00:15:17.272 "product_name": "Malloc disk", 00:15:17.272 "block_size": 512, 00:15:17.272 "num_blocks": 65536, 00:15:17.272 "uuid": "02a8e070-74ee-4b28-bc1e-fffc748fda60", 00:15:17.272 "assigned_rate_limits": { 00:15:17.272 "rw_ios_per_sec": 0, 00:15:17.272 
"rw_mbytes_per_sec": 0, 00:15:17.272 "r_mbytes_per_sec": 0, 00:15:17.272 "w_mbytes_per_sec": 0 00:15:17.272 }, 00:15:17.272 "claimed": true, 00:15:17.272 "claim_type": "exclusive_write", 00:15:17.272 "zoned": false, 00:15:17.272 "supported_io_types": { 00:15:17.272 "read": true, 00:15:17.272 "write": true, 00:15:17.272 "unmap": true, 00:15:17.272 "flush": true, 00:15:17.272 "reset": true, 00:15:17.272 "nvme_admin": false, 00:15:17.272 "nvme_io": false, 00:15:17.272 "nvme_io_md": false, 00:15:17.272 "write_zeroes": true, 00:15:17.272 "zcopy": true, 00:15:17.272 "get_zone_info": false, 00:15:17.272 "zone_management": false, 00:15:17.272 "zone_append": false, 00:15:17.272 "compare": false, 00:15:17.272 "compare_and_write": false, 00:15:17.272 "abort": true, 00:15:17.272 "seek_hole": false, 00:15:17.272 "seek_data": false, 00:15:17.272 "copy": true, 00:15:17.272 "nvme_iov_md": false 00:15:17.272 }, 00:15:17.272 "memory_domains": [ 00:15:17.272 { 00:15:17.272 "dma_device_id": "system", 00:15:17.272 "dma_device_type": 1 00:15:17.272 }, 00:15:17.272 { 00:15:17.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.272 "dma_device_type": 2 00:15:17.272 } 00:15:17.272 ], 00:15:17.272 "driver_specific": {} 00:15:17.272 } 00:15:17.272 ] 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.272 20:10:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.272 "name": "Existed_Raid", 00:15:17.272 "uuid": "65e7aa32-f471-45c8-9985-4eaf215f6ad0", 00:15:17.272 "strip_size_kb": 64, 00:15:17.272 "state": "online", 00:15:17.272 "raid_level": "raid5f", 00:15:17.272 "superblock": false, 00:15:17.272 "num_base_bdevs": 4, 00:15:17.272 "num_base_bdevs_discovered": 4, 00:15:17.272 "num_base_bdevs_operational": 4, 00:15:17.272 "base_bdevs_list": [ 00:15:17.272 { 00:15:17.272 "name": 
"BaseBdev1", 00:15:17.272 "uuid": "364066df-adea-4728-814d-aa8793133386", 00:15:17.272 "is_configured": true, 00:15:17.272 "data_offset": 0, 00:15:17.272 "data_size": 65536 00:15:17.272 }, 00:15:17.272 { 00:15:17.272 "name": "BaseBdev2", 00:15:17.272 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:17.272 "is_configured": true, 00:15:17.272 "data_offset": 0, 00:15:17.272 "data_size": 65536 00:15:17.272 }, 00:15:17.272 { 00:15:17.272 "name": "BaseBdev3", 00:15:17.272 "uuid": "7eaf38e6-ee39-4298-a559-e97a42957910", 00:15:17.272 "is_configured": true, 00:15:17.272 "data_offset": 0, 00:15:17.272 "data_size": 65536 00:15:17.272 }, 00:15:17.272 { 00:15:17.272 "name": "BaseBdev4", 00:15:17.272 "uuid": "02a8e070-74ee-4b28-bc1e-fffc748fda60", 00:15:17.272 "is_configured": true, 00:15:17.272 "data_offset": 0, 00:15:17.272 "data_size": 65536 00:15:17.272 } 00:15:17.272 ] 00:15:17.272 }' 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.272 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 [2024-12-08 20:10:49.568193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.842 "name": "Existed_Raid", 00:15:17.842 "aliases": [ 00:15:17.842 "65e7aa32-f471-45c8-9985-4eaf215f6ad0" 00:15:17.842 ], 00:15:17.842 "product_name": "Raid Volume", 00:15:17.842 "block_size": 512, 00:15:17.842 "num_blocks": 196608, 00:15:17.842 "uuid": "65e7aa32-f471-45c8-9985-4eaf215f6ad0", 00:15:17.842 "assigned_rate_limits": { 00:15:17.842 "rw_ios_per_sec": 0, 00:15:17.843 "rw_mbytes_per_sec": 0, 00:15:17.843 "r_mbytes_per_sec": 0, 00:15:17.843 "w_mbytes_per_sec": 0 00:15:17.843 }, 00:15:17.843 "claimed": false, 00:15:17.843 "zoned": false, 00:15:17.843 "supported_io_types": { 00:15:17.843 "read": true, 00:15:17.843 "write": true, 00:15:17.843 "unmap": false, 00:15:17.843 "flush": false, 00:15:17.843 "reset": true, 00:15:17.843 "nvme_admin": false, 00:15:17.843 "nvme_io": false, 00:15:17.843 "nvme_io_md": false, 00:15:17.843 "write_zeroes": true, 00:15:17.843 "zcopy": false, 00:15:17.843 "get_zone_info": false, 00:15:17.843 "zone_management": false, 00:15:17.843 "zone_append": false, 00:15:17.843 "compare": false, 00:15:17.843 "compare_and_write": false, 00:15:17.843 "abort": false, 00:15:17.843 "seek_hole": false, 00:15:17.843 "seek_data": false, 00:15:17.843 "copy": false, 00:15:17.843 "nvme_iov_md": false 00:15:17.843 }, 00:15:17.843 "driver_specific": { 00:15:17.843 "raid": { 00:15:17.843 "uuid": "65e7aa32-f471-45c8-9985-4eaf215f6ad0", 00:15:17.843 "strip_size_kb": 64, 
00:15:17.843 "state": "online", 00:15:17.843 "raid_level": "raid5f", 00:15:17.843 "superblock": false, 00:15:17.843 "num_base_bdevs": 4, 00:15:17.843 "num_base_bdevs_discovered": 4, 00:15:17.843 "num_base_bdevs_operational": 4, 00:15:17.843 "base_bdevs_list": [ 00:15:17.843 { 00:15:17.843 "name": "BaseBdev1", 00:15:17.843 "uuid": "364066df-adea-4728-814d-aa8793133386", 00:15:17.843 "is_configured": true, 00:15:17.843 "data_offset": 0, 00:15:17.843 "data_size": 65536 00:15:17.843 }, 00:15:17.843 { 00:15:17.843 "name": "BaseBdev2", 00:15:17.843 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:17.843 "is_configured": true, 00:15:17.843 "data_offset": 0, 00:15:17.843 "data_size": 65536 00:15:17.843 }, 00:15:17.843 { 00:15:17.843 "name": "BaseBdev3", 00:15:17.843 "uuid": "7eaf38e6-ee39-4298-a559-e97a42957910", 00:15:17.843 "is_configured": true, 00:15:17.843 "data_offset": 0, 00:15:17.843 "data_size": 65536 00:15:17.843 }, 00:15:17.843 { 00:15:17.843 "name": "BaseBdev4", 00:15:17.843 "uuid": "02a8e070-74ee-4b28-bc1e-fffc748fda60", 00:15:17.843 "is_configured": true, 00:15:17.843 "data_offset": 0, 00:15:17.843 "data_size": 65536 00:15:17.843 } 00:15:17.843 ] 00:15:17.843 } 00:15:17.843 } 00:15:17.843 }' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:17.843 BaseBdev2 00:15:17.843 BaseBdev3 00:15:17.843 BaseBdev4' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.843 20:10:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.843 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:18.104 [2024-12-08 20:10:49.843512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.104 20:10:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.104 "name": "Existed_Raid", 00:15:18.104 "uuid": "65e7aa32-f471-45c8-9985-4eaf215f6ad0", 00:15:18.104 "strip_size_kb": 64, 00:15:18.104 "state": "online", 00:15:18.104 "raid_level": "raid5f", 00:15:18.104 "superblock": false, 00:15:18.104 "num_base_bdevs": 4, 00:15:18.104 "num_base_bdevs_discovered": 3, 00:15:18.104 "num_base_bdevs_operational": 3, 00:15:18.104 "base_bdevs_list": [ 00:15:18.104 { 00:15:18.104 "name": null, 00:15:18.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.104 "is_configured": false, 00:15:18.104 "data_offset": 0, 00:15:18.104 "data_size": 65536 00:15:18.104 }, 00:15:18.104 { 00:15:18.104 "name": "BaseBdev2", 00:15:18.104 "uuid": "bf518791-b8b7-4ea4-b0aa-556deffa7250", 00:15:18.104 "is_configured": true, 00:15:18.104 "data_offset": 0, 00:15:18.104 "data_size": 65536 00:15:18.104 }, 00:15:18.104 { 00:15:18.104 "name": "BaseBdev3", 00:15:18.104 "uuid": "7eaf38e6-ee39-4298-a559-e97a42957910", 00:15:18.104 "is_configured": true, 00:15:18.104 "data_offset": 0, 00:15:18.104 "data_size": 65536 00:15:18.104 }, 00:15:18.104 { 00:15:18.104 "name": "BaseBdev4", 00:15:18.104 "uuid": "02a8e070-74ee-4b28-bc1e-fffc748fda60", 00:15:18.104 "is_configured": true, 00:15:18.104 "data_offset": 0, 00:15:18.104 "data_size": 65536 00:15:18.104 } 00:15:18.104 ] 00:15:18.104 }' 00:15:18.104 
20:10:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.104 20:10:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.674 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.675 [2024-12-08 20:10:50.407177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.675 [2024-12-08 20:10:50.407272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.675 [2024-12-08 20:10:50.498302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.675 [2024-12-08 20:10:50.550223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.675 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.935 [2024-12-08 20:10:50.687159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:18.935 [2024-12-08 20:10:50.687211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.935 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.936 BaseBdev2 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.936 [ 00:15:18.936 { 00:15:18.936 "name": "BaseBdev2", 00:15:18.936 "aliases": [ 00:15:18.936 "88971b98-6e50-492f-822b-018c740960a7" 00:15:18.936 ], 00:15:18.936 "product_name": "Malloc disk", 00:15:18.936 "block_size": 512, 00:15:18.936 "num_blocks": 65536, 00:15:18.936 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:18.936 "assigned_rate_limits": { 00:15:18.936 "rw_ios_per_sec": 0, 00:15:18.936 "rw_mbytes_per_sec": 0, 00:15:18.936 "r_mbytes_per_sec": 0, 00:15:18.936 "w_mbytes_per_sec": 0 00:15:18.936 }, 00:15:18.936 "claimed": false, 00:15:18.936 "zoned": false, 00:15:18.936 "supported_io_types": { 00:15:18.936 "read": true, 00:15:18.936 "write": true, 00:15:18.936 "unmap": true, 00:15:18.936 "flush": true, 00:15:18.936 "reset": true, 00:15:18.936 "nvme_admin": false, 00:15:18.936 "nvme_io": false, 00:15:18.936 "nvme_io_md": false, 00:15:18.936 "write_zeroes": true, 00:15:18.936 "zcopy": true, 00:15:18.936 "get_zone_info": false, 00:15:18.936 "zone_management": false, 00:15:18.936 "zone_append": false, 00:15:18.936 "compare": false, 00:15:18.936 "compare_and_write": false, 00:15:18.936 "abort": true, 00:15:18.936 "seek_hole": false, 00:15:18.936 "seek_data": false, 00:15:18.936 "copy": true, 00:15:18.936 "nvme_iov_md": false 00:15:18.936 }, 00:15:18.936 "memory_domains": [ 00:15:18.936 { 00:15:18.936 "dma_device_id": "system", 00:15:18.936 
"dma_device_type": 1 00:15:18.936 }, 00:15:18.936 { 00:15:18.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.936 "dma_device_type": 2 00:15:18.936 } 00:15:18.936 ], 00:15:18.936 "driver_specific": {} 00:15:18.936 } 00:15:18.936 ] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.936 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 BaseBdev3 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.197 20:10:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 [ 00:15:19.197 { 00:15:19.197 "name": "BaseBdev3", 00:15:19.197 "aliases": [ 00:15:19.197 "cf1ec34f-94c3-450c-84c3-9b8e4f58c936" 00:15:19.197 ], 00:15:19.197 "product_name": "Malloc disk", 00:15:19.197 "block_size": 512, 00:15:19.197 "num_blocks": 65536, 00:15:19.197 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:19.197 "assigned_rate_limits": { 00:15:19.197 "rw_ios_per_sec": 0, 00:15:19.197 "rw_mbytes_per_sec": 0, 00:15:19.197 "r_mbytes_per_sec": 0, 00:15:19.197 "w_mbytes_per_sec": 0 00:15:19.197 }, 00:15:19.197 "claimed": false, 00:15:19.197 "zoned": false, 00:15:19.197 "supported_io_types": { 00:15:19.197 "read": true, 00:15:19.197 "write": true, 00:15:19.197 "unmap": true, 00:15:19.197 "flush": true, 00:15:19.197 "reset": true, 00:15:19.197 "nvme_admin": false, 00:15:19.197 "nvme_io": false, 00:15:19.197 "nvme_io_md": false, 00:15:19.197 "write_zeroes": true, 00:15:19.197 "zcopy": true, 00:15:19.197 "get_zone_info": false, 00:15:19.197 "zone_management": false, 00:15:19.197 "zone_append": false, 00:15:19.197 "compare": false, 00:15:19.197 "compare_and_write": false, 00:15:19.197 "abort": true, 00:15:19.197 "seek_hole": false, 00:15:19.197 "seek_data": false, 00:15:19.197 "copy": true, 00:15:19.197 "nvme_iov_md": false 00:15:19.197 }, 00:15:19.197 "memory_domains": [ 00:15:19.197 { 00:15:19.197 
"dma_device_id": "system", 00:15:19.197 "dma_device_type": 1 00:15:19.197 }, 00:15:19.197 { 00:15:19.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.197 "dma_device_type": 2 00:15:19.197 } 00:15:19.197 ], 00:15:19.197 "driver_specific": {} 00:15:19.197 } 00:15:19.197 ] 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 BaseBdev4 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 [ 00:15:19.197 { 00:15:19.197 "name": "BaseBdev4", 00:15:19.197 "aliases": [ 00:15:19.197 "59329119-7622-4acb-8c8b-cca3e1b0b2ba" 00:15:19.197 ], 00:15:19.197 "product_name": "Malloc disk", 00:15:19.197 "block_size": 512, 00:15:19.197 "num_blocks": 65536, 00:15:19.197 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:19.197 "assigned_rate_limits": { 00:15:19.197 "rw_ios_per_sec": 0, 00:15:19.197 "rw_mbytes_per_sec": 0, 00:15:19.197 "r_mbytes_per_sec": 0, 00:15:19.197 "w_mbytes_per_sec": 0 00:15:19.197 }, 00:15:19.197 "claimed": false, 00:15:19.197 "zoned": false, 00:15:19.197 "supported_io_types": { 00:15:19.197 "read": true, 00:15:19.197 "write": true, 00:15:19.197 "unmap": true, 00:15:19.197 "flush": true, 00:15:19.197 "reset": true, 00:15:19.197 "nvme_admin": false, 00:15:19.197 "nvme_io": false, 00:15:19.197 "nvme_io_md": false, 00:15:19.197 "write_zeroes": true, 00:15:19.197 "zcopy": true, 00:15:19.197 "get_zone_info": false, 00:15:19.197 "zone_management": false, 00:15:19.197 "zone_append": false, 00:15:19.197 "compare": false, 00:15:19.197 "compare_and_write": false, 00:15:19.197 "abort": true, 00:15:19.197 "seek_hole": false, 00:15:19.197 "seek_data": false, 00:15:19.197 "copy": true, 00:15:19.197 "nvme_iov_md": false 00:15:19.197 }, 00:15:19.197 "memory_domains": [ 
00:15:19.197 { 00:15:19.197 "dma_device_id": "system", 00:15:19.197 "dma_device_type": 1 00:15:19.197 }, 00:15:19.197 { 00:15:19.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.197 "dma_device_type": 2 00:15:19.197 } 00:15:19.197 ], 00:15:19.197 "driver_specific": {} 00:15:19.197 } 00:15:19.197 ] 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.197 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.197 [2024-12-08 20:10:51.050330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.197 [2024-12-08 20:10:51.050422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.197 [2024-12-08 20:10:51.050462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.197 [2024-12-08 20:10:51.052216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.197 [2024-12-08 20:10:51.052303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.198 "name": "Existed_Raid", 00:15:19.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.198 "strip_size_kb": 64, 00:15:19.198 "state": "configuring", 00:15:19.198 "raid_level": "raid5f", 00:15:19.198 
"superblock": false, 00:15:19.198 "num_base_bdevs": 4, 00:15:19.198 "num_base_bdevs_discovered": 3, 00:15:19.198 "num_base_bdevs_operational": 4, 00:15:19.198 "base_bdevs_list": [ 00:15:19.198 { 00:15:19.198 "name": "BaseBdev1", 00:15:19.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.198 "is_configured": false, 00:15:19.198 "data_offset": 0, 00:15:19.198 "data_size": 0 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev2", 00:15:19.198 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 0, 00:15:19.198 "data_size": 65536 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev3", 00:15:19.198 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 0, 00:15:19.198 "data_size": 65536 00:15:19.198 }, 00:15:19.198 { 00:15:19.198 "name": "BaseBdev4", 00:15:19.198 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:19.198 "is_configured": true, 00:15:19.198 "data_offset": 0, 00:15:19.198 "data_size": 65536 00:15:19.198 } 00:15:19.198 ] 00:15:19.198 }' 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.198 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 [2024-12-08 20:10:51.497563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.767 "name": "Existed_Raid", 00:15:19.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.767 "strip_size_kb": 64, 00:15:19.767 "state": "configuring", 00:15:19.767 "raid_level": "raid5f", 00:15:19.767 "superblock": false, 
00:15:19.767 "num_base_bdevs": 4, 00:15:19.767 "num_base_bdevs_discovered": 2, 00:15:19.767 "num_base_bdevs_operational": 4, 00:15:19.767 "base_bdevs_list": [ 00:15:19.767 { 00:15:19.767 "name": "BaseBdev1", 00:15:19.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.767 "is_configured": false, 00:15:19.767 "data_offset": 0, 00:15:19.767 "data_size": 0 00:15:19.767 }, 00:15:19.767 { 00:15:19.767 "name": null, 00:15:19.767 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:19.767 "is_configured": false, 00:15:19.767 "data_offset": 0, 00:15:19.767 "data_size": 65536 00:15:19.767 }, 00:15:19.767 { 00:15:19.767 "name": "BaseBdev3", 00:15:19.767 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:19.767 "is_configured": true, 00:15:19.767 "data_offset": 0, 00:15:19.767 "data_size": 65536 00:15:19.767 }, 00:15:19.767 { 00:15:19.767 "name": "BaseBdev4", 00:15:19.767 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:19.767 "is_configured": true, 00:15:19.767 "data_offset": 0, 00:15:19.767 "data_size": 65536 00:15:19.767 } 00:15:19.767 ] 00:15:19.767 }' 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.767 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:20.028 
20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.028 [2024-12-08 20:10:51.983806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.028 BaseBdev1 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.028 20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.028 
20:10:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.288 [ 00:15:20.288 { 00:15:20.288 "name": "BaseBdev1", 00:15:20.289 "aliases": [ 00:15:20.289 "55989431-4f45-4a79-b7a7-2fedc07b4b53" 00:15:20.289 ], 00:15:20.289 "product_name": "Malloc disk", 00:15:20.289 "block_size": 512, 00:15:20.289 "num_blocks": 65536, 00:15:20.289 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:20.289 "assigned_rate_limits": { 00:15:20.289 "rw_ios_per_sec": 0, 00:15:20.289 "rw_mbytes_per_sec": 0, 00:15:20.289 "r_mbytes_per_sec": 0, 00:15:20.289 "w_mbytes_per_sec": 0 00:15:20.289 }, 00:15:20.289 "claimed": true, 00:15:20.289 "claim_type": "exclusive_write", 00:15:20.289 "zoned": false, 00:15:20.289 "supported_io_types": { 00:15:20.289 "read": true, 00:15:20.289 "write": true, 00:15:20.289 "unmap": true, 00:15:20.289 "flush": true, 00:15:20.289 "reset": true, 00:15:20.289 "nvme_admin": false, 00:15:20.289 "nvme_io": false, 00:15:20.289 "nvme_io_md": false, 00:15:20.289 "write_zeroes": true, 00:15:20.289 "zcopy": true, 00:15:20.289 "get_zone_info": false, 00:15:20.289 "zone_management": false, 00:15:20.289 "zone_append": false, 00:15:20.289 "compare": false, 00:15:20.289 "compare_and_write": false, 00:15:20.289 "abort": true, 00:15:20.289 "seek_hole": false, 00:15:20.289 "seek_data": false, 00:15:20.289 "copy": true, 00:15:20.289 "nvme_iov_md": false 00:15:20.289 }, 00:15:20.289 "memory_domains": [ 00:15:20.289 { 00:15:20.289 "dma_device_id": "system", 00:15:20.289 "dma_device_type": 1 00:15:20.289 }, 00:15:20.289 { 00:15:20.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.289 "dma_device_type": 2 00:15:20.289 } 00:15:20.289 ], 00:15:20.289 "driver_specific": {} 00:15:20.289 } 00:15:20.289 ] 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.289 20:10:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.289 "name": "Existed_Raid", 00:15:20.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.289 "strip_size_kb": 64, 00:15:20.289 "state": 
"configuring", 00:15:20.289 "raid_level": "raid5f", 00:15:20.289 "superblock": false, 00:15:20.289 "num_base_bdevs": 4, 00:15:20.289 "num_base_bdevs_discovered": 3, 00:15:20.289 "num_base_bdevs_operational": 4, 00:15:20.289 "base_bdevs_list": [ 00:15:20.289 { 00:15:20.289 "name": "BaseBdev1", 00:15:20.289 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:20.289 "is_configured": true, 00:15:20.289 "data_offset": 0, 00:15:20.289 "data_size": 65536 00:15:20.289 }, 00:15:20.289 { 00:15:20.289 "name": null, 00:15:20.289 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:20.289 "is_configured": false, 00:15:20.289 "data_offset": 0, 00:15:20.289 "data_size": 65536 00:15:20.289 }, 00:15:20.289 { 00:15:20.289 "name": "BaseBdev3", 00:15:20.289 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:20.289 "is_configured": true, 00:15:20.289 "data_offset": 0, 00:15:20.289 "data_size": 65536 00:15:20.289 }, 00:15:20.289 { 00:15:20.289 "name": "BaseBdev4", 00:15:20.289 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:20.289 "is_configured": true, 00:15:20.289 "data_offset": 0, 00:15:20.289 "data_size": 65536 00:15:20.289 } 00:15:20.289 ] 00:15:20.289 }' 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.289 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.550 20:10:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.550 [2024-12-08 20:10:52.518948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.550 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.551 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.811 20:10:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.811 "name": "Existed_Raid", 00:15:20.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.811 "strip_size_kb": 64, 00:15:20.811 "state": "configuring", 00:15:20.811 "raid_level": "raid5f", 00:15:20.811 "superblock": false, 00:15:20.811 "num_base_bdevs": 4, 00:15:20.811 "num_base_bdevs_discovered": 2, 00:15:20.811 "num_base_bdevs_operational": 4, 00:15:20.811 "base_bdevs_list": [ 00:15:20.811 { 00:15:20.811 "name": "BaseBdev1", 00:15:20.811 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:20.811 "is_configured": true, 00:15:20.811 "data_offset": 0, 00:15:20.811 "data_size": 65536 00:15:20.811 }, 00:15:20.811 { 00:15:20.811 "name": null, 00:15:20.811 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:20.811 "is_configured": false, 00:15:20.811 "data_offset": 0, 00:15:20.811 "data_size": 65536 00:15:20.811 }, 00:15:20.811 { 00:15:20.811 "name": null, 00:15:20.811 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:20.811 "is_configured": false, 00:15:20.811 "data_offset": 0, 00:15:20.811 "data_size": 65536 00:15:20.811 }, 00:15:20.811 { 00:15:20.811 "name": "BaseBdev4", 00:15:20.811 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:20.811 "is_configured": true, 00:15:20.811 "data_offset": 0, 00:15:20.811 "data_size": 65536 00:15:20.811 } 00:15:20.811 ] 00:15:20.811 }' 00:15:20.811 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.811 20:10:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.071 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.071 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.072 [2024-12-08 20:10:52.978158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.072 
20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.072 20:10:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.072 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.072 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.072 "name": "Existed_Raid", 00:15:21.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.072 "strip_size_kb": 64, 00:15:21.072 "state": "configuring", 00:15:21.072 "raid_level": "raid5f", 00:15:21.072 "superblock": false, 00:15:21.072 "num_base_bdevs": 4, 00:15:21.072 "num_base_bdevs_discovered": 3, 00:15:21.072 "num_base_bdevs_operational": 4, 00:15:21.072 "base_bdevs_list": [ 00:15:21.072 { 00:15:21.072 "name": "BaseBdev1", 00:15:21.072 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:21.072 "is_configured": true, 00:15:21.072 "data_offset": 0, 00:15:21.072 "data_size": 65536 00:15:21.072 }, 00:15:21.072 { 00:15:21.072 "name": null, 00:15:21.072 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:21.072 "is_configured": 
false, 00:15:21.072 "data_offset": 0, 00:15:21.072 "data_size": 65536 00:15:21.072 }, 00:15:21.072 { 00:15:21.072 "name": "BaseBdev3", 00:15:21.072 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:21.072 "is_configured": true, 00:15:21.072 "data_offset": 0, 00:15:21.072 "data_size": 65536 00:15:21.072 }, 00:15:21.072 { 00:15:21.072 "name": "BaseBdev4", 00:15:21.072 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:21.072 "is_configured": true, 00:15:21.072 "data_offset": 0, 00:15:21.072 "data_size": 65536 00:15:21.072 } 00:15:21.072 ] 00:15:21.072 }' 00:15:21.072 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.072 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.642 [2024-12-08 20:10:53.429407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.642 "name": "Existed_Raid", 00:15:21.642 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:21.642 "strip_size_kb": 64, 00:15:21.642 "state": "configuring", 00:15:21.642 "raid_level": "raid5f", 00:15:21.642 "superblock": false, 00:15:21.642 "num_base_bdevs": 4, 00:15:21.642 "num_base_bdevs_discovered": 2, 00:15:21.642 "num_base_bdevs_operational": 4, 00:15:21.642 "base_bdevs_list": [ 00:15:21.642 { 00:15:21.642 "name": null, 00:15:21.642 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:21.642 "is_configured": false, 00:15:21.642 "data_offset": 0, 00:15:21.642 "data_size": 65536 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "name": null, 00:15:21.642 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:21.642 "is_configured": false, 00:15:21.642 "data_offset": 0, 00:15:21.642 "data_size": 65536 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "name": "BaseBdev3", 00:15:21.642 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:21.642 "is_configured": true, 00:15:21.642 "data_offset": 0, 00:15:21.642 "data_size": 65536 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "name": "BaseBdev4", 00:15:21.642 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:21.642 "is_configured": true, 00:15:21.642 "data_offset": 0, 00:15:21.642 "data_size": 65536 00:15:21.642 } 00:15:21.642 ] 00:15:21.642 }' 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.642 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.211 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.212 [2024-12-08 20:10:53.984741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.212 20:10:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.212 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.212 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.212 "name": "Existed_Raid", 00:15:22.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.212 "strip_size_kb": 64, 00:15:22.212 "state": "configuring", 00:15:22.212 "raid_level": "raid5f", 00:15:22.212 "superblock": false, 00:15:22.212 "num_base_bdevs": 4, 00:15:22.212 "num_base_bdevs_discovered": 3, 00:15:22.212 "num_base_bdevs_operational": 4, 00:15:22.212 "base_bdevs_list": [ 00:15:22.212 { 00:15:22.212 "name": null, 00:15:22.212 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:22.212 "is_configured": false, 00:15:22.212 "data_offset": 0, 00:15:22.212 "data_size": 65536 00:15:22.212 }, 00:15:22.212 { 00:15:22.212 "name": "BaseBdev2", 00:15:22.212 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:22.212 "is_configured": true, 00:15:22.212 "data_offset": 0, 00:15:22.212 "data_size": 65536 00:15:22.212 }, 00:15:22.212 { 00:15:22.212 "name": "BaseBdev3", 00:15:22.212 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:22.212 "is_configured": true, 00:15:22.212 "data_offset": 0, 00:15:22.212 "data_size": 65536 00:15:22.212 }, 00:15:22.212 { 00:15:22.212 "name": "BaseBdev4", 00:15:22.212 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:22.212 "is_configured": true, 00:15:22.212 "data_offset": 0, 00:15:22.212 "data_size": 65536 00:15:22.212 } 00:15:22.212 ] 00:15:22.212 }' 00:15:22.212 20:10:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.212 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.471 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55989431-4f45-4a79-b7a7-2fedc07b4b53 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.731 [2024-12-08 20:10:54.490304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:22.731 [2024-12-08 
20:10:54.490352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:22.731 [2024-12-08 20:10:54.490359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:22.731 [2024-12-08 20:10:54.490595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:22.731 [2024-12-08 20:10:54.497431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:22.731 [2024-12-08 20:10:54.497454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:22.731 [2024-12-08 20:10:54.497699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.731 NewBaseBdev 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.731 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.732 [ 00:15:22.732 { 00:15:22.732 "name": "NewBaseBdev", 00:15:22.732 "aliases": [ 00:15:22.732 "55989431-4f45-4a79-b7a7-2fedc07b4b53" 00:15:22.732 ], 00:15:22.732 "product_name": "Malloc disk", 00:15:22.732 "block_size": 512, 00:15:22.732 "num_blocks": 65536, 00:15:22.732 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:22.732 "assigned_rate_limits": { 00:15:22.732 "rw_ios_per_sec": 0, 00:15:22.732 "rw_mbytes_per_sec": 0, 00:15:22.732 "r_mbytes_per_sec": 0, 00:15:22.732 "w_mbytes_per_sec": 0 00:15:22.732 }, 00:15:22.732 "claimed": true, 00:15:22.732 "claim_type": "exclusive_write", 00:15:22.732 "zoned": false, 00:15:22.732 "supported_io_types": { 00:15:22.732 "read": true, 00:15:22.732 "write": true, 00:15:22.732 "unmap": true, 00:15:22.732 "flush": true, 00:15:22.732 "reset": true, 00:15:22.732 "nvme_admin": false, 00:15:22.732 "nvme_io": false, 00:15:22.732 "nvme_io_md": false, 00:15:22.732 "write_zeroes": true, 00:15:22.732 "zcopy": true, 00:15:22.732 "get_zone_info": false, 00:15:22.732 "zone_management": false, 00:15:22.732 "zone_append": false, 00:15:22.732 "compare": false, 00:15:22.732 "compare_and_write": false, 00:15:22.732 "abort": true, 00:15:22.732 "seek_hole": false, 00:15:22.732 "seek_data": false, 00:15:22.732 "copy": true, 00:15:22.732 "nvme_iov_md": false 00:15:22.732 }, 00:15:22.732 "memory_domains": [ 00:15:22.732 { 00:15:22.732 "dma_device_id": "system", 00:15:22.732 "dma_device_type": 1 00:15:22.732 }, 00:15:22.732 { 00:15:22.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.732 "dma_device_type": 2 00:15:22.732 } 
00:15:22.732 ], 00:15:22.732 "driver_specific": {} 00:15:22.732 } 00:15:22.732 ] 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.732 "name": "Existed_Raid", 00:15:22.732 "uuid": "66c22602-b1d8-4df4-992e-069100c8fbdc", 00:15:22.732 "strip_size_kb": 64, 00:15:22.732 "state": "online", 00:15:22.732 "raid_level": "raid5f", 00:15:22.732 "superblock": false, 00:15:22.732 "num_base_bdevs": 4, 00:15:22.732 "num_base_bdevs_discovered": 4, 00:15:22.732 "num_base_bdevs_operational": 4, 00:15:22.732 "base_bdevs_list": [ 00:15:22.732 { 00:15:22.732 "name": "NewBaseBdev", 00:15:22.732 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:22.732 "is_configured": true, 00:15:22.732 "data_offset": 0, 00:15:22.732 "data_size": 65536 00:15:22.732 }, 00:15:22.732 { 00:15:22.732 "name": "BaseBdev2", 00:15:22.732 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:22.732 "is_configured": true, 00:15:22.732 "data_offset": 0, 00:15:22.732 "data_size": 65536 00:15:22.732 }, 00:15:22.732 { 00:15:22.732 "name": "BaseBdev3", 00:15:22.732 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:22.732 "is_configured": true, 00:15:22.732 "data_offset": 0, 00:15:22.732 "data_size": 65536 00:15:22.732 }, 00:15:22.732 { 00:15:22.732 "name": "BaseBdev4", 00:15:22.732 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:22.732 "is_configured": true, 00:15:22.732 "data_offset": 0, 00:15:22.732 "data_size": 65536 00:15:22.732 } 00:15:22.732 ] 00:15:22.732 }' 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.732 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.305 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.306 20:10:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.306 [2024-12-08 20:10:54.993673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.306 "name": "Existed_Raid", 00:15:23.306 "aliases": [ 00:15:23.306 "66c22602-b1d8-4df4-992e-069100c8fbdc" 00:15:23.306 ], 00:15:23.306 "product_name": "Raid Volume", 00:15:23.306 "block_size": 512, 00:15:23.306 "num_blocks": 196608, 00:15:23.306 "uuid": "66c22602-b1d8-4df4-992e-069100c8fbdc", 00:15:23.306 "assigned_rate_limits": { 00:15:23.306 "rw_ios_per_sec": 0, 00:15:23.306 "rw_mbytes_per_sec": 0, 00:15:23.306 "r_mbytes_per_sec": 0, 00:15:23.306 "w_mbytes_per_sec": 0 00:15:23.306 }, 00:15:23.306 "claimed": false, 00:15:23.306 "zoned": false, 00:15:23.306 "supported_io_types": { 00:15:23.306 "read": true, 00:15:23.306 "write": true, 00:15:23.306 "unmap": false, 00:15:23.306 "flush": false, 00:15:23.306 "reset": true, 00:15:23.306 "nvme_admin": false, 00:15:23.306 "nvme_io": false, 00:15:23.306 "nvme_io_md": 
false, 00:15:23.306 "write_zeroes": true, 00:15:23.306 "zcopy": false, 00:15:23.306 "get_zone_info": false, 00:15:23.306 "zone_management": false, 00:15:23.306 "zone_append": false, 00:15:23.306 "compare": false, 00:15:23.306 "compare_and_write": false, 00:15:23.306 "abort": false, 00:15:23.306 "seek_hole": false, 00:15:23.306 "seek_data": false, 00:15:23.306 "copy": false, 00:15:23.306 "nvme_iov_md": false 00:15:23.306 }, 00:15:23.306 "driver_specific": { 00:15:23.306 "raid": { 00:15:23.306 "uuid": "66c22602-b1d8-4df4-992e-069100c8fbdc", 00:15:23.306 "strip_size_kb": 64, 00:15:23.306 "state": "online", 00:15:23.306 "raid_level": "raid5f", 00:15:23.306 "superblock": false, 00:15:23.306 "num_base_bdevs": 4, 00:15:23.306 "num_base_bdevs_discovered": 4, 00:15:23.306 "num_base_bdevs_operational": 4, 00:15:23.306 "base_bdevs_list": [ 00:15:23.306 { 00:15:23.306 "name": "NewBaseBdev", 00:15:23.306 "uuid": "55989431-4f45-4a79-b7a7-2fedc07b4b53", 00:15:23.306 "is_configured": true, 00:15:23.306 "data_offset": 0, 00:15:23.306 "data_size": 65536 00:15:23.306 }, 00:15:23.306 { 00:15:23.306 "name": "BaseBdev2", 00:15:23.306 "uuid": "88971b98-6e50-492f-822b-018c740960a7", 00:15:23.306 "is_configured": true, 00:15:23.306 "data_offset": 0, 00:15:23.306 "data_size": 65536 00:15:23.306 }, 00:15:23.306 { 00:15:23.306 "name": "BaseBdev3", 00:15:23.306 "uuid": "cf1ec34f-94c3-450c-84c3-9b8e4f58c936", 00:15:23.306 "is_configured": true, 00:15:23.306 "data_offset": 0, 00:15:23.306 "data_size": 65536 00:15:23.306 }, 00:15:23.306 { 00:15:23.306 "name": "BaseBdev4", 00:15:23.306 "uuid": "59329119-7622-4acb-8c8b-cca3e1b0b2ba", 00:15:23.306 "is_configured": true, 00:15:23.306 "data_offset": 0, 00:15:23.306 "data_size": 65536 00:15:23.306 } 00:15:23.306 ] 00:15:23.306 } 00:15:23.306 } 00:15:23.306 }' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.306 20:10:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:23.306 BaseBdev2 00:15:23.306 BaseBdev3 00:15:23.306 BaseBdev4' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.306 20:10:55 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.568 [2024-12-08 20:10:55.308900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.568 [2024-12-08 20:10:55.308976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.568 [2024-12-08 20:10:55.309064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.568 [2024-12-08 20:10:55.309400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.568 [2024-12-08 20:10:55.309454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82447 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82447 ']' 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82447 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.568 20:10:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82447 00:15:23.568 killing process with pid 82447 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82447' 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82447 00:15:23.568 [2024-12-08 20:10:55.351122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.568 20:10:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82447 00:15:23.827 [2024-12-08 20:10:55.724150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:25.209 00:15:25.209 real 0m11.049s 00:15:25.209 user 0m17.633s 00:15:25.209 sys 0m1.951s 00:15:25.209 ************************************ 00:15:25.209 END TEST raid5f_state_function_test 00:15:25.209 ************************************ 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.209 20:10:56 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:25.209 20:10:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:25.209 20:10:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.209 20:10:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.209 ************************************ 00:15:25.209 START TEST 
raid5f_state_function_test_sb 00:15:25.209 ************************************ 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:25.209 
20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83113 00:15:25.209 Process raid pid: 83113 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83113' 00:15:25.209 20:10:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83113 00:15:25.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83113 ']' 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.209 20:10:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.209 [2024-12-08 20:10:56.943593] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:15:25.209 [2024-12-08 20:10:56.943724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.209 [2024-12-08 20:10:57.111010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.469 [2024-12-08 20:10:57.220460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.469 [2024-12-08 20:10:57.416703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.469 [2024-12-08 20:10:57.416739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.040 [2024-12-08 20:10:57.758155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.040 [2024-12-08 20:10:57.758209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.040 [2024-12-08 20:10:57.758220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.040 [2024-12-08 20:10:57.758245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.040 [2024-12-08 20:10:57.758251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:26.040 [2024-12-08 20:10:57.758260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.040 [2024-12-08 20:10:57.758265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.040 [2024-12-08 20:10:57.758273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.040 "name": "Existed_Raid", 00:15:26.040 "uuid": "3dfdad77-0b06-4775-b262-dcdb6d9c86d3", 00:15:26.040 "strip_size_kb": 64, 00:15:26.040 "state": "configuring", 00:15:26.040 "raid_level": "raid5f", 00:15:26.040 "superblock": true, 00:15:26.040 "num_base_bdevs": 4, 00:15:26.040 "num_base_bdevs_discovered": 0, 00:15:26.040 "num_base_bdevs_operational": 4, 00:15:26.040 "base_bdevs_list": [ 00:15:26.040 { 00:15:26.040 "name": "BaseBdev1", 00:15:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.040 "is_configured": false, 00:15:26.040 "data_offset": 0, 00:15:26.040 "data_size": 0 00:15:26.040 }, 00:15:26.040 { 00:15:26.040 "name": "BaseBdev2", 00:15:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.040 "is_configured": false, 00:15:26.040 "data_offset": 0, 00:15:26.040 "data_size": 0 00:15:26.040 }, 00:15:26.040 { 00:15:26.040 "name": "BaseBdev3", 00:15:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.040 "is_configured": false, 00:15:26.040 "data_offset": 0, 00:15:26.040 "data_size": 0 00:15:26.040 }, 00:15:26.040 { 00:15:26.040 "name": "BaseBdev4", 00:15:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.040 "is_configured": false, 00:15:26.040 "data_offset": 0, 00:15:26.040 "data_size": 0 00:15:26.040 } 00:15:26.040 ] 00:15:26.040 }' 00:15:26.040 20:10:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.041 20:10:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.302 [2024-12-08 20:10:58.197320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.302 [2024-12-08 20:10:58.197399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.302 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.302 [2024-12-08 20:10:58.209325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.302 [2024-12-08 20:10:58.209414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.302 [2024-12-08 20:10:58.209442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.302 [2024-12-08 20:10:58.209465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.302 [2024-12-08 20:10:58.209484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.302 [2024-12-08 20:10:58.209505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.302 [2024-12-08 20:10:58.209568] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.303 [2024-12-08 20:10:58.209607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.303 [2024-12-08 20:10:58.255395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.303 BaseBdev1 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.303 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.564 [ 00:15:26.564 { 00:15:26.564 "name": "BaseBdev1", 00:15:26.564 "aliases": [ 00:15:26.564 "639bd454-2460-4f67-9786-adb130d3a39c" 00:15:26.564 ], 00:15:26.564 "product_name": "Malloc disk", 00:15:26.564 "block_size": 512, 00:15:26.564 "num_blocks": 65536, 00:15:26.564 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:26.564 "assigned_rate_limits": { 00:15:26.564 "rw_ios_per_sec": 0, 00:15:26.564 "rw_mbytes_per_sec": 0, 00:15:26.564 "r_mbytes_per_sec": 0, 00:15:26.564 "w_mbytes_per_sec": 0 00:15:26.564 }, 00:15:26.564 "claimed": true, 00:15:26.564 "claim_type": "exclusive_write", 00:15:26.564 "zoned": false, 00:15:26.564 "supported_io_types": { 00:15:26.564 "read": true, 00:15:26.564 "write": true, 00:15:26.564 "unmap": true, 00:15:26.564 "flush": true, 00:15:26.564 "reset": true, 00:15:26.564 "nvme_admin": false, 00:15:26.564 "nvme_io": false, 00:15:26.564 "nvme_io_md": false, 00:15:26.564 "write_zeroes": true, 00:15:26.564 "zcopy": true, 00:15:26.564 "get_zone_info": false, 00:15:26.564 "zone_management": false, 00:15:26.564 "zone_append": false, 00:15:26.564 "compare": false, 00:15:26.564 "compare_and_write": false, 00:15:26.564 "abort": true, 00:15:26.564 "seek_hole": false, 00:15:26.564 "seek_data": false, 00:15:26.564 "copy": true, 00:15:26.564 "nvme_iov_md": false 00:15:26.564 }, 00:15:26.564 "memory_domains": [ 00:15:26.564 { 00:15:26.564 "dma_device_id": "system", 00:15:26.564 "dma_device_type": 1 00:15:26.564 }, 00:15:26.564 { 00:15:26.564 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:26.564 "dma_device_type": 2 00:15:26.564 } 00:15:26.564 ], 00:15:26.564 "driver_specific": {} 00:15:26.564 } 00:15:26.564 ] 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.564 20:10:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.564 "name": "Existed_Raid", 00:15:26.564 "uuid": "06c59386-5582-4607-afbb-57b411a1c4b7", 00:15:26.564 "strip_size_kb": 64, 00:15:26.564 "state": "configuring", 00:15:26.564 "raid_level": "raid5f", 00:15:26.564 "superblock": true, 00:15:26.564 "num_base_bdevs": 4, 00:15:26.564 "num_base_bdevs_discovered": 1, 00:15:26.564 "num_base_bdevs_operational": 4, 00:15:26.564 "base_bdevs_list": [ 00:15:26.564 { 00:15:26.564 "name": "BaseBdev1", 00:15:26.564 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:26.564 "is_configured": true, 00:15:26.564 "data_offset": 2048, 00:15:26.564 "data_size": 63488 00:15:26.564 }, 00:15:26.564 { 00:15:26.564 "name": "BaseBdev2", 00:15:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.564 "is_configured": false, 00:15:26.564 "data_offset": 0, 00:15:26.564 "data_size": 0 00:15:26.564 }, 00:15:26.564 { 00:15:26.564 "name": "BaseBdev3", 00:15:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.564 "is_configured": false, 00:15:26.564 "data_offset": 0, 00:15:26.564 "data_size": 0 00:15:26.564 }, 00:15:26.564 { 00:15:26.564 "name": "BaseBdev4", 00:15:26.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.564 "is_configured": false, 00:15:26.564 "data_offset": 0, 00:15:26.564 "data_size": 0 00:15:26.564 } 00:15:26.564 ] 00:15:26.564 }' 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.564 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.825 20:10:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.825 [2024-12-08 20:10:58.706627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.825 [2024-12-08 20:10:58.706717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.825 [2024-12-08 20:10:58.718669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.825 [2024-12-08 20:10:58.720461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.825 [2024-12-08 20:10:58.720541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.825 [2024-12-08 20:10:58.720556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.825 [2024-12-08 20:10:58.720566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.825 [2024-12-08 20:10:58.720573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.825 [2024-12-08 20:10:58.720580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.825 20:10:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.825 "name": "Existed_Raid", 00:15:26.825 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:26.825 "strip_size_kb": 64, 00:15:26.825 "state": "configuring", 00:15:26.825 "raid_level": "raid5f", 00:15:26.825 "superblock": true, 00:15:26.825 "num_base_bdevs": 4, 00:15:26.825 "num_base_bdevs_discovered": 1, 00:15:26.825 "num_base_bdevs_operational": 4, 00:15:26.825 "base_bdevs_list": [ 00:15:26.825 { 00:15:26.825 "name": "BaseBdev1", 00:15:26.825 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:26.825 "is_configured": true, 00:15:26.825 "data_offset": 2048, 00:15:26.825 "data_size": 63488 00:15:26.825 }, 00:15:26.825 { 00:15:26.825 "name": "BaseBdev2", 00:15:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.825 "is_configured": false, 00:15:26.825 "data_offset": 0, 00:15:26.825 "data_size": 0 00:15:26.825 }, 00:15:26.825 { 00:15:26.825 "name": "BaseBdev3", 00:15:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.825 "is_configured": false, 00:15:26.825 "data_offset": 0, 00:15:26.825 "data_size": 0 00:15:26.825 }, 00:15:26.825 { 00:15:26.825 "name": "BaseBdev4", 00:15:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.825 "is_configured": false, 00:15:26.825 "data_offset": 0, 00:15:26.825 "data_size": 0 00:15:26.825 } 00:15:26.825 ] 00:15:26.825 }' 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.825 20:10:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 [2024-12-08 20:10:59.163882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.397 BaseBdev2 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 [ 00:15:27.397 { 00:15:27.397 "name": "BaseBdev2", 00:15:27.397 "aliases": [ 00:15:27.397 
"29d556f1-1e83-4bf1-b9dc-316df5d7c8ef" 00:15:27.397 ], 00:15:27.397 "product_name": "Malloc disk", 00:15:27.397 "block_size": 512, 00:15:27.397 "num_blocks": 65536, 00:15:27.397 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:27.397 "assigned_rate_limits": { 00:15:27.397 "rw_ios_per_sec": 0, 00:15:27.397 "rw_mbytes_per_sec": 0, 00:15:27.397 "r_mbytes_per_sec": 0, 00:15:27.397 "w_mbytes_per_sec": 0 00:15:27.397 }, 00:15:27.397 "claimed": true, 00:15:27.397 "claim_type": "exclusive_write", 00:15:27.397 "zoned": false, 00:15:27.397 "supported_io_types": { 00:15:27.397 "read": true, 00:15:27.397 "write": true, 00:15:27.397 "unmap": true, 00:15:27.397 "flush": true, 00:15:27.397 "reset": true, 00:15:27.397 "nvme_admin": false, 00:15:27.397 "nvme_io": false, 00:15:27.397 "nvme_io_md": false, 00:15:27.397 "write_zeroes": true, 00:15:27.397 "zcopy": true, 00:15:27.397 "get_zone_info": false, 00:15:27.397 "zone_management": false, 00:15:27.397 "zone_append": false, 00:15:27.397 "compare": false, 00:15:27.397 "compare_and_write": false, 00:15:27.397 "abort": true, 00:15:27.397 "seek_hole": false, 00:15:27.397 "seek_data": false, 00:15:27.397 "copy": true, 00:15:27.397 "nvme_iov_md": false 00:15:27.397 }, 00:15:27.397 "memory_domains": [ 00:15:27.397 { 00:15:27.397 "dma_device_id": "system", 00:15:27.397 "dma_device_type": 1 00:15:27.397 }, 00:15:27.397 { 00:15:27.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.397 "dma_device_type": 2 00:15:27.397 } 00:15:27.397 ], 00:15:27.397 "driver_specific": {} 00:15:27.397 } 00:15:27.397 ] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.397 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.397 "name": "Existed_Raid", 00:15:27.397 "uuid": 
"4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:27.397 "strip_size_kb": 64, 00:15:27.397 "state": "configuring", 00:15:27.397 "raid_level": "raid5f", 00:15:27.397 "superblock": true, 00:15:27.397 "num_base_bdevs": 4, 00:15:27.397 "num_base_bdevs_discovered": 2, 00:15:27.397 "num_base_bdevs_operational": 4, 00:15:27.397 "base_bdevs_list": [ 00:15:27.397 { 00:15:27.397 "name": "BaseBdev1", 00:15:27.397 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:27.397 "is_configured": true, 00:15:27.397 "data_offset": 2048, 00:15:27.397 "data_size": 63488 00:15:27.397 }, 00:15:27.397 { 00:15:27.397 "name": "BaseBdev2", 00:15:27.397 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:27.397 "is_configured": true, 00:15:27.397 "data_offset": 2048, 00:15:27.397 "data_size": 63488 00:15:27.397 }, 00:15:27.397 { 00:15:27.397 "name": "BaseBdev3", 00:15:27.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.397 "is_configured": false, 00:15:27.397 "data_offset": 0, 00:15:27.397 "data_size": 0 00:15:27.397 }, 00:15:27.398 { 00:15:27.398 "name": "BaseBdev4", 00:15:27.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.398 "is_configured": false, 00:15:27.398 "data_offset": 0, 00:15:27.398 "data_size": 0 00:15:27.398 } 00:15:27.398 ] 00:15:27.398 }' 00:15:27.398 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.398 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.658 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.658 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.658 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 [2024-12-08 20:10:59.651604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.918 BaseBdev3 
00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 [ 00:15:27.918 { 00:15:27.918 "name": "BaseBdev3", 00:15:27.918 "aliases": [ 00:15:27.918 "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf" 00:15:27.918 ], 00:15:27.918 "product_name": "Malloc disk", 00:15:27.918 "block_size": 512, 00:15:27.918 "num_blocks": 65536, 00:15:27.918 "uuid": "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf", 00:15:27.918 
"assigned_rate_limits": { 00:15:27.918 "rw_ios_per_sec": 0, 00:15:27.918 "rw_mbytes_per_sec": 0, 00:15:27.918 "r_mbytes_per_sec": 0, 00:15:27.918 "w_mbytes_per_sec": 0 00:15:27.918 }, 00:15:27.918 "claimed": true, 00:15:27.918 "claim_type": "exclusive_write", 00:15:27.918 "zoned": false, 00:15:27.918 "supported_io_types": { 00:15:27.918 "read": true, 00:15:27.918 "write": true, 00:15:27.918 "unmap": true, 00:15:27.918 "flush": true, 00:15:27.918 "reset": true, 00:15:27.918 "nvme_admin": false, 00:15:27.918 "nvme_io": false, 00:15:27.918 "nvme_io_md": false, 00:15:27.918 "write_zeroes": true, 00:15:27.918 "zcopy": true, 00:15:27.918 "get_zone_info": false, 00:15:27.918 "zone_management": false, 00:15:27.918 "zone_append": false, 00:15:27.918 "compare": false, 00:15:27.918 "compare_and_write": false, 00:15:27.918 "abort": true, 00:15:27.918 "seek_hole": false, 00:15:27.918 "seek_data": false, 00:15:27.918 "copy": true, 00:15:27.918 "nvme_iov_md": false 00:15:27.918 }, 00:15:27.918 "memory_domains": [ 00:15:27.918 { 00:15:27.918 "dma_device_id": "system", 00:15:27.918 "dma_device_type": 1 00:15:27.918 }, 00:15:27.918 { 00:15:27.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.918 "dma_device_type": 2 00:15:27.918 } 00:15:27.918 ], 00:15:27.918 "driver_specific": {} 00:15:27.918 } 00:15:27.918 ] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.918 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.918 "name": "Existed_Raid", 00:15:27.918 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:27.918 "strip_size_kb": 64, 00:15:27.918 "state": "configuring", 00:15:27.918 "raid_level": "raid5f", 00:15:27.918 "superblock": true, 00:15:27.918 "num_base_bdevs": 4, 00:15:27.918 "num_base_bdevs_discovered": 3, 
00:15:27.918 "num_base_bdevs_operational": 4, 00:15:27.918 "base_bdevs_list": [ 00:15:27.918 { 00:15:27.918 "name": "BaseBdev1", 00:15:27.918 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:27.918 "is_configured": true, 00:15:27.918 "data_offset": 2048, 00:15:27.918 "data_size": 63488 00:15:27.918 }, 00:15:27.918 { 00:15:27.918 "name": "BaseBdev2", 00:15:27.918 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:27.918 "is_configured": true, 00:15:27.918 "data_offset": 2048, 00:15:27.918 "data_size": 63488 00:15:27.918 }, 00:15:27.918 { 00:15:27.918 "name": "BaseBdev3", 00:15:27.918 "uuid": "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf", 00:15:27.918 "is_configured": true, 00:15:27.918 "data_offset": 2048, 00:15:27.918 "data_size": 63488 00:15:27.918 }, 00:15:27.918 { 00:15:27.918 "name": "BaseBdev4", 00:15:27.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.918 "is_configured": false, 00:15:27.919 "data_offset": 0, 00:15:27.919 "data_size": 0 00:15:27.919 } 00:15:27.919 ] 00:15:27.919 }' 00:15:27.919 20:10:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.919 20:10:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.179 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:28.179 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.179 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.440 [2024-12-08 20:11:00.163864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.440 [2024-12-08 20:11:00.164287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:28.440 [2024-12-08 20:11:00.164341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:28.440 [2024-12-08 
20:11:00.164665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:28.440 BaseBdev4 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.440 [2024-12-08 20:11:00.171997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:28.440 [2024-12-08 20:11:00.172054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:28.440 [2024-12-08 20:11:00.172364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:28.440 20:11:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.440 [ 00:15:28.440 { 00:15:28.440 "name": "BaseBdev4", 00:15:28.440 "aliases": [ 00:15:28.440 "9a1bfb50-2de0-4aa9-9d49-d5bfa94e0ce3" 00:15:28.440 ], 00:15:28.440 "product_name": "Malloc disk", 00:15:28.440 "block_size": 512, 00:15:28.440 "num_blocks": 65536, 00:15:28.440 "uuid": "9a1bfb50-2de0-4aa9-9d49-d5bfa94e0ce3", 00:15:28.440 "assigned_rate_limits": { 00:15:28.440 "rw_ios_per_sec": 0, 00:15:28.440 "rw_mbytes_per_sec": 0, 00:15:28.440 "r_mbytes_per_sec": 0, 00:15:28.440 "w_mbytes_per_sec": 0 00:15:28.440 }, 00:15:28.440 "claimed": true, 00:15:28.440 "claim_type": "exclusive_write", 00:15:28.440 "zoned": false, 00:15:28.440 "supported_io_types": { 00:15:28.440 "read": true, 00:15:28.440 "write": true, 00:15:28.440 "unmap": true, 00:15:28.440 "flush": true, 00:15:28.440 "reset": true, 00:15:28.440 "nvme_admin": false, 00:15:28.440 "nvme_io": false, 00:15:28.440 "nvme_io_md": false, 00:15:28.440 "write_zeroes": true, 00:15:28.440 "zcopy": true, 00:15:28.440 "get_zone_info": false, 00:15:28.440 "zone_management": false, 00:15:28.440 "zone_append": false, 00:15:28.440 "compare": false, 00:15:28.440 "compare_and_write": false, 00:15:28.440 "abort": true, 00:15:28.440 "seek_hole": false, 00:15:28.440 "seek_data": false, 00:15:28.440 "copy": true, 00:15:28.440 "nvme_iov_md": false 00:15:28.440 }, 00:15:28.440 "memory_domains": [ 00:15:28.440 { 00:15:28.440 "dma_device_id": "system", 00:15:28.440 "dma_device_type": 1 00:15:28.440 }, 00:15:28.440 { 00:15:28.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.440 "dma_device_type": 2 00:15:28.440 } 00:15:28.440 ], 00:15:28.440 "driver_specific": {} 00:15:28.440 } 00:15:28.440 ] 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.440 20:11:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.440 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.440 "name": "Existed_Raid", 00:15:28.441 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:28.441 "strip_size_kb": 64, 00:15:28.441 "state": "online", 00:15:28.441 "raid_level": "raid5f", 00:15:28.441 "superblock": true, 00:15:28.441 "num_base_bdevs": 4, 00:15:28.441 "num_base_bdevs_discovered": 4, 00:15:28.441 "num_base_bdevs_operational": 4, 00:15:28.441 "base_bdevs_list": [ 00:15:28.441 { 00:15:28.441 "name": "BaseBdev1", 00:15:28.441 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:28.441 "is_configured": true, 00:15:28.441 "data_offset": 2048, 00:15:28.441 "data_size": 63488 00:15:28.441 }, 00:15:28.441 { 00:15:28.441 "name": "BaseBdev2", 00:15:28.441 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:28.441 "is_configured": true, 00:15:28.441 "data_offset": 2048, 00:15:28.441 "data_size": 63488 00:15:28.441 }, 00:15:28.441 { 00:15:28.441 "name": "BaseBdev3", 00:15:28.441 "uuid": "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf", 00:15:28.441 "is_configured": true, 00:15:28.441 "data_offset": 2048, 00:15:28.441 "data_size": 63488 00:15:28.441 }, 00:15:28.441 { 00:15:28.441 "name": "BaseBdev4", 00:15:28.441 "uuid": "9a1bfb50-2de0-4aa9-9d49-d5bfa94e0ce3", 00:15:28.441 "is_configured": true, 00:15:28.441 "data_offset": 2048, 00:15:28.441 "data_size": 63488 00:15:28.441 } 00:15:28.441 ] 00:15:28.441 }' 00:15:28.441 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.441 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.701 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.701 [2024-12-08 20:11:00.663658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.960 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.960 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.960 "name": "Existed_Raid", 00:15:28.960 "aliases": [ 00:15:28.960 "4253c81e-3ab9-4959-9fbe-556815c4d5de" 00:15:28.960 ], 00:15:28.960 "product_name": "Raid Volume", 00:15:28.960 "block_size": 512, 00:15:28.960 "num_blocks": 190464, 00:15:28.960 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:28.960 "assigned_rate_limits": { 00:15:28.960 "rw_ios_per_sec": 0, 00:15:28.960 "rw_mbytes_per_sec": 0, 00:15:28.960 "r_mbytes_per_sec": 0, 00:15:28.960 "w_mbytes_per_sec": 0 00:15:28.960 }, 00:15:28.960 "claimed": false, 00:15:28.960 "zoned": false, 00:15:28.960 "supported_io_types": { 00:15:28.960 "read": true, 00:15:28.960 "write": true, 00:15:28.960 "unmap": false, 00:15:28.960 "flush": false, 
00:15:28.960 "reset": true, 00:15:28.960 "nvme_admin": false, 00:15:28.960 "nvme_io": false, 00:15:28.960 "nvme_io_md": false, 00:15:28.960 "write_zeroes": true, 00:15:28.960 "zcopy": false, 00:15:28.960 "get_zone_info": false, 00:15:28.960 "zone_management": false, 00:15:28.960 "zone_append": false, 00:15:28.960 "compare": false, 00:15:28.960 "compare_and_write": false, 00:15:28.960 "abort": false, 00:15:28.960 "seek_hole": false, 00:15:28.960 "seek_data": false, 00:15:28.960 "copy": false, 00:15:28.960 "nvme_iov_md": false 00:15:28.960 }, 00:15:28.960 "driver_specific": { 00:15:28.960 "raid": { 00:15:28.960 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:28.960 "strip_size_kb": 64, 00:15:28.960 "state": "online", 00:15:28.960 "raid_level": "raid5f", 00:15:28.960 "superblock": true, 00:15:28.960 "num_base_bdevs": 4, 00:15:28.960 "num_base_bdevs_discovered": 4, 00:15:28.960 "num_base_bdevs_operational": 4, 00:15:28.960 "base_bdevs_list": [ 00:15:28.960 { 00:15:28.960 "name": "BaseBdev1", 00:15:28.960 "uuid": "639bd454-2460-4f67-9786-adb130d3a39c", 00:15:28.960 "is_configured": true, 00:15:28.960 "data_offset": 2048, 00:15:28.960 "data_size": 63488 00:15:28.960 }, 00:15:28.960 { 00:15:28.960 "name": "BaseBdev2", 00:15:28.960 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:28.960 "is_configured": true, 00:15:28.960 "data_offset": 2048, 00:15:28.960 "data_size": 63488 00:15:28.960 }, 00:15:28.960 { 00:15:28.960 "name": "BaseBdev3", 00:15:28.960 "uuid": "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf", 00:15:28.960 "is_configured": true, 00:15:28.960 "data_offset": 2048, 00:15:28.961 "data_size": 63488 00:15:28.961 }, 00:15:28.961 { 00:15:28.961 "name": "BaseBdev4", 00:15:28.961 "uuid": "9a1bfb50-2de0-4aa9-9d49-d5bfa94e0ce3", 00:15:28.961 "is_configured": true, 00:15:28.961 "data_offset": 2048, 00:15:28.961 "data_size": 63488 00:15:28.961 } 00:15:28.961 ] 00:15:28.961 } 00:15:28.961 } 00:15:28.961 }' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:28.961 BaseBdev2 00:15:28.961 BaseBdev3 00:15:28.961 BaseBdev4' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.961 20:11:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
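The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a four-field tuple (block_size, md_size, md_interleave, dif_type) joined with spaces; malloc bdevs report no metadata, so three of the four fields are empty and the literal carries trailing spaces. A hedged bash sketch of that join-and-compare, with the values hard-coded instead of coming from a live RPC call:

```shell
# Sketch of verify_raid_bdev_properties' per-base-bdev tuple comparison.
# The real script builds each tuple with:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
join_props() {
  # block_size md_size md_interleave dif_type, space-joined like jq's join(" ")
  printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

# Malloc bdevs carry no metadata, so three of the four fields are empty:
cmp_raid_bdev=$(join_props 512 "" "" "")
cmp_base_bdev=$(join_props 512 "" "" "")

# "512" plus three trailing spaces -- the escaped pattern seen in the trace
[[ $cmp_base_bdev == "$cmp_raid_bdev" ]] && echo "base bdev properties match raid bdev"
```

Note that command substitution strips trailing newlines but keeps trailing spaces, which is what makes the trailing-space comparison meaningful.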
00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.961 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.221 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.221 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.221 20:11:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.221 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.221 20:11:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.221 [2024-12-08 20:11:00.958972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.221 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.221 "name": "Existed_Raid", 00:15:29.221 "uuid": "4253c81e-3ab9-4959-9fbe-556815c4d5de", 00:15:29.221 "strip_size_kb": 64, 00:15:29.221 "state": "online", 00:15:29.221 "raid_level": "raid5f", 00:15:29.221 "superblock": true, 00:15:29.221 "num_base_bdevs": 4, 00:15:29.221 "num_base_bdevs_discovered": 3, 00:15:29.221 "num_base_bdevs_operational": 3, 00:15:29.221 "base_bdevs_list": [ 00:15:29.221 { 00:15:29.221 "name": null, 00:15:29.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:29.221 "is_configured": false, 00:15:29.221 "data_offset": 0, 00:15:29.221 "data_size": 63488 00:15:29.221 }, 00:15:29.221 { 00:15:29.221 "name": "BaseBdev2", 00:15:29.221 "uuid": "29d556f1-1e83-4bf1-b9dc-316df5d7c8ef", 00:15:29.221 "is_configured": true, 00:15:29.222 "data_offset": 2048, 00:15:29.222 "data_size": 63488 00:15:29.222 }, 00:15:29.222 { 00:15:29.222 "name": "BaseBdev3", 00:15:29.222 "uuid": "f9964ed5-2599-44e5-ba4d-e3e05ac85dbf", 00:15:29.222 "is_configured": true, 00:15:29.222 "data_offset": 2048, 00:15:29.222 "data_size": 63488 00:15:29.222 }, 00:15:29.222 { 00:15:29.222 "name": "BaseBdev4", 00:15:29.222 "uuid": "9a1bfb50-2de0-4aa9-9d49-d5bfa94e0ce3", 00:15:29.222 "is_configured": true, 00:15:29.222 "data_offset": 2048, 00:15:29.222 "data_size": 63488 00:15:29.222 } 00:15:29.222 ] 00:15:29.222 }' 00:15:29.222 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.222 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.482 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:29.482 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.742 [2024-12-08 20:11:01.515934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.742 [2024-12-08 20:11:01.516176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.742 [2024-12-08 20:11:01.609759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.742 
20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.742 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.742 [2024-12-08 20:11:01.669697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.002 [2024-12-08 20:11:01.816849] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:30.002 [2024-12-08 20:11:01.816942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.002 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.003 20:11:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.263 BaseBdev2 00:15:30.263 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.263 20:11:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:30.263 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:30.263 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.264 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.264 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.264 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.264 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.264 20:11:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 [ 00:15:30.264 { 00:15:30.264 "name": "BaseBdev2", 00:15:30.264 "aliases": [ 00:15:30.264 "6c62f664-e0af-463c-a2c4-4c0367bb959d" 00:15:30.264 ], 00:15:30.264 "product_name": "Malloc disk", 00:15:30.264 "block_size": 512, 00:15:30.264 "num_blocks": 65536, 00:15:30.264 "uuid": 
"6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:30.264 "assigned_rate_limits": { 00:15:30.264 "rw_ios_per_sec": 0, 00:15:30.264 "rw_mbytes_per_sec": 0, 00:15:30.264 "r_mbytes_per_sec": 0, 00:15:30.264 "w_mbytes_per_sec": 0 00:15:30.264 }, 00:15:30.264 "claimed": false, 00:15:30.264 "zoned": false, 00:15:30.264 "supported_io_types": { 00:15:30.264 "read": true, 00:15:30.264 "write": true, 00:15:30.264 "unmap": true, 00:15:30.264 "flush": true, 00:15:30.264 "reset": true, 00:15:30.264 "nvme_admin": false, 00:15:30.264 "nvme_io": false, 00:15:30.264 "nvme_io_md": false, 00:15:30.264 "write_zeroes": true, 00:15:30.264 "zcopy": true, 00:15:30.264 "get_zone_info": false, 00:15:30.264 "zone_management": false, 00:15:30.264 "zone_append": false, 00:15:30.264 "compare": false, 00:15:30.264 "compare_and_write": false, 00:15:30.264 "abort": true, 00:15:30.264 "seek_hole": false, 00:15:30.264 "seek_data": false, 00:15:30.264 "copy": true, 00:15:30.264 "nvme_iov_md": false 00:15:30.264 }, 00:15:30.264 "memory_domains": [ 00:15:30.264 { 00:15:30.264 "dma_device_id": "system", 00:15:30.264 "dma_device_type": 1 00:15:30.264 }, 00:15:30.264 { 00:15:30.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.264 "dma_device_type": 2 00:15:30.264 } 00:15:30.264 ], 00:15:30.264 "driver_specific": {} 00:15:30.264 } 00:15:30.264 ] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 BaseBdev3 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 [ 00:15:30.264 { 00:15:30.264 "name": "BaseBdev3", 00:15:30.264 "aliases": [ 00:15:30.264 "5d2e2694-7565-4ce9-b664-c5c1cb553f9a" 00:15:30.264 ], 00:15:30.264 
"product_name": "Malloc disk", 00:15:30.264 "block_size": 512, 00:15:30.264 "num_blocks": 65536, 00:15:30.264 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:30.264 "assigned_rate_limits": { 00:15:30.264 "rw_ios_per_sec": 0, 00:15:30.264 "rw_mbytes_per_sec": 0, 00:15:30.264 "r_mbytes_per_sec": 0, 00:15:30.264 "w_mbytes_per_sec": 0 00:15:30.264 }, 00:15:30.264 "claimed": false, 00:15:30.264 "zoned": false, 00:15:30.264 "supported_io_types": { 00:15:30.264 "read": true, 00:15:30.264 "write": true, 00:15:30.264 "unmap": true, 00:15:30.264 "flush": true, 00:15:30.264 "reset": true, 00:15:30.264 "nvme_admin": false, 00:15:30.264 "nvme_io": false, 00:15:30.264 "nvme_io_md": false, 00:15:30.264 "write_zeroes": true, 00:15:30.264 "zcopy": true, 00:15:30.264 "get_zone_info": false, 00:15:30.264 "zone_management": false, 00:15:30.264 "zone_append": false, 00:15:30.264 "compare": false, 00:15:30.264 "compare_and_write": false, 00:15:30.264 "abort": true, 00:15:30.264 "seek_hole": false, 00:15:30.264 "seek_data": false, 00:15:30.264 "copy": true, 00:15:30.264 "nvme_iov_md": false 00:15:30.264 }, 00:15:30.264 "memory_domains": [ 00:15:30.264 { 00:15:30.264 "dma_device_id": "system", 00:15:30.264 "dma_device_type": 1 00:15:30.264 }, 00:15:30.264 { 00:15:30.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.264 "dma_device_type": 2 00:15:30.264 } 00:15:30.264 ], 00:15:30.264 "driver_specific": {} 00:15:30.264 } 00:15:30.264 ] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 BaseBdev4 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.264 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.264 [ 00:15:30.264 { 00:15:30.264 "name": "BaseBdev4", 00:15:30.264 
"aliases": [ 00:15:30.264 "00978b4c-a5bb-4f0e-8b49-db5b4621ad74" 00:15:30.264 ], 00:15:30.264 "product_name": "Malloc disk", 00:15:30.264 "block_size": 512, 00:15:30.264 "num_blocks": 65536, 00:15:30.264 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:30.264 "assigned_rate_limits": { 00:15:30.264 "rw_ios_per_sec": 0, 00:15:30.264 "rw_mbytes_per_sec": 0, 00:15:30.264 "r_mbytes_per_sec": 0, 00:15:30.264 "w_mbytes_per_sec": 0 00:15:30.264 }, 00:15:30.264 "claimed": false, 00:15:30.264 "zoned": false, 00:15:30.264 "supported_io_types": { 00:15:30.264 "read": true, 00:15:30.264 "write": true, 00:15:30.264 "unmap": true, 00:15:30.264 "flush": true, 00:15:30.264 "reset": true, 00:15:30.264 "nvme_admin": false, 00:15:30.265 "nvme_io": false, 00:15:30.265 "nvme_io_md": false, 00:15:30.265 "write_zeroes": true, 00:15:30.265 "zcopy": true, 00:15:30.265 "get_zone_info": false, 00:15:30.265 "zone_management": false, 00:15:30.265 "zone_append": false, 00:15:30.265 "compare": false, 00:15:30.265 "compare_and_write": false, 00:15:30.265 "abort": true, 00:15:30.265 "seek_hole": false, 00:15:30.265 "seek_data": false, 00:15:30.265 "copy": true, 00:15:30.265 "nvme_iov_md": false 00:15:30.265 }, 00:15:30.265 "memory_domains": [ 00:15:30.265 { 00:15:30.265 "dma_device_id": "system", 00:15:30.265 "dma_device_type": 1 00:15:30.265 }, 00:15:30.265 { 00:15:30.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.265 "dma_device_type": 2 00:15:30.265 } 00:15:30.265 ], 00:15:30.265 "driver_specific": {} 00:15:30.265 } 00:15:30.265 ] 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:30.265 
20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.265 [2024-12-08 20:11:02.193641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.265 [2024-12-08 20:11:02.193733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.265 [2024-12-08 20:11:02.193773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.265 [2024-12-08 20:11:02.195558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.265 [2024-12-08 20:11:02.195661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.265 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.525 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.525 "name": "Existed_Raid", 00:15:30.525 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:30.525 "strip_size_kb": 64, 00:15:30.525 "state": "configuring", 00:15:30.525 "raid_level": "raid5f", 00:15:30.525 "superblock": true, 00:15:30.525 "num_base_bdevs": 4, 00:15:30.525 "num_base_bdevs_discovered": 3, 00:15:30.525 "num_base_bdevs_operational": 4, 00:15:30.525 "base_bdevs_list": [ 00:15:30.525 { 00:15:30.525 "name": "BaseBdev1", 00:15:30.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.525 "is_configured": false, 00:15:30.525 "data_offset": 0, 00:15:30.525 "data_size": 0 00:15:30.525 }, 00:15:30.525 { 00:15:30.525 "name": "BaseBdev2", 00:15:30.525 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:30.525 "is_configured": true, 00:15:30.525 "data_offset": 2048, 00:15:30.525 "data_size": 63488 00:15:30.525 }, 00:15:30.525 { 00:15:30.525 "name": "BaseBdev3", 
00:15:30.525 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:30.525 "is_configured": true, 00:15:30.525 "data_offset": 2048, 00:15:30.525 "data_size": 63488 00:15:30.525 }, 00:15:30.525 { 00:15:30.525 "name": "BaseBdev4", 00:15:30.525 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:30.525 "is_configured": true, 00:15:30.525 "data_offset": 2048, 00:15:30.525 "data_size": 63488 00:15:30.525 } 00:15:30.525 ] 00:15:30.525 }' 00:15:30.525 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.525 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 [2024-12-08 20:11:02.604953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.784 
20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.784 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.784 "name": "Existed_Raid", 00:15:30.784 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:30.784 "strip_size_kb": 64, 00:15:30.784 "state": "configuring", 00:15:30.784 "raid_level": "raid5f", 00:15:30.784 "superblock": true, 00:15:30.784 "num_base_bdevs": 4, 00:15:30.784 "num_base_bdevs_discovered": 2, 00:15:30.784 "num_base_bdevs_operational": 4, 00:15:30.784 "base_bdevs_list": [ 00:15:30.784 { 00:15:30.784 "name": "BaseBdev1", 00:15:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.784 "is_configured": false, 00:15:30.784 "data_offset": 0, 00:15:30.784 "data_size": 0 00:15:30.784 }, 00:15:30.784 { 00:15:30.784 "name": null, 00:15:30.784 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:30.784 "is_configured": false, 00:15:30.784 "data_offset": 0, 00:15:30.784 "data_size": 63488 00:15:30.784 }, 00:15:30.785 { 
00:15:30.785 "name": "BaseBdev3", 00:15:30.785 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:30.785 "is_configured": true, 00:15:30.785 "data_offset": 2048, 00:15:30.785 "data_size": 63488 00:15:30.785 }, 00:15:30.785 { 00:15:30.785 "name": "BaseBdev4", 00:15:30.785 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:30.785 "is_configured": true, 00:15:30.785 "data_offset": 2048, 00:15:30.785 "data_size": 63488 00:15:30.785 } 00:15:30.785 ] 00:15:30.785 }' 00:15:30.785 20:11:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.785 20:11:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 [2024-12-08 20:11:03.098463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.355 BaseBdev1 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 [ 00:15:31.355 { 00:15:31.355 "name": "BaseBdev1", 00:15:31.355 "aliases": [ 00:15:31.355 "401caa8f-dec3-4d5c-9940-045e82bd01e2" 00:15:31.355 ], 00:15:31.355 "product_name": "Malloc disk", 00:15:31.355 "block_size": 512, 00:15:31.355 "num_blocks": 65536, 00:15:31.355 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:31.355 "assigned_rate_limits": { 00:15:31.355 "rw_ios_per_sec": 0, 00:15:31.355 "rw_mbytes_per_sec": 0, 00:15:31.355 
"r_mbytes_per_sec": 0, 00:15:31.355 "w_mbytes_per_sec": 0 00:15:31.355 }, 00:15:31.355 "claimed": true, 00:15:31.355 "claim_type": "exclusive_write", 00:15:31.355 "zoned": false, 00:15:31.355 "supported_io_types": { 00:15:31.355 "read": true, 00:15:31.355 "write": true, 00:15:31.355 "unmap": true, 00:15:31.355 "flush": true, 00:15:31.355 "reset": true, 00:15:31.355 "nvme_admin": false, 00:15:31.355 "nvme_io": false, 00:15:31.355 "nvme_io_md": false, 00:15:31.355 "write_zeroes": true, 00:15:31.355 "zcopy": true, 00:15:31.355 "get_zone_info": false, 00:15:31.355 "zone_management": false, 00:15:31.355 "zone_append": false, 00:15:31.355 "compare": false, 00:15:31.355 "compare_and_write": false, 00:15:31.355 "abort": true, 00:15:31.355 "seek_hole": false, 00:15:31.355 "seek_data": false, 00:15:31.355 "copy": true, 00:15:31.355 "nvme_iov_md": false 00:15:31.355 }, 00:15:31.355 "memory_domains": [ 00:15:31.355 { 00:15:31.355 "dma_device_id": "system", 00:15:31.355 "dma_device_type": 1 00:15:31.355 }, 00:15:31.355 { 00:15:31.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.355 "dma_device_type": 2 00:15:31.355 } 00:15:31.355 ], 00:15:31.355 "driver_specific": {} 00:15:31.355 } 00:15:31.355 ] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.355 20:11:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.355 "name": "Existed_Raid", 00:15:31.355 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:31.355 "strip_size_kb": 64, 00:15:31.355 "state": "configuring", 00:15:31.355 "raid_level": "raid5f", 00:15:31.355 "superblock": true, 00:15:31.355 "num_base_bdevs": 4, 00:15:31.355 "num_base_bdevs_discovered": 3, 00:15:31.355 "num_base_bdevs_operational": 4, 00:15:31.355 "base_bdevs_list": [ 00:15:31.355 { 00:15:31.355 "name": "BaseBdev1", 00:15:31.355 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:31.355 "is_configured": true, 00:15:31.355 "data_offset": 2048, 00:15:31.355 "data_size": 63488 00:15:31.355 
}, 00:15:31.355 { 00:15:31.355 "name": null, 00:15:31.355 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:31.355 "is_configured": false, 00:15:31.355 "data_offset": 0, 00:15:31.355 "data_size": 63488 00:15:31.355 }, 00:15:31.355 { 00:15:31.355 "name": "BaseBdev3", 00:15:31.355 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:31.355 "is_configured": true, 00:15:31.355 "data_offset": 2048, 00:15:31.355 "data_size": 63488 00:15:31.355 }, 00:15:31.355 { 00:15:31.355 "name": "BaseBdev4", 00:15:31.355 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:31.355 "is_configured": true, 00:15:31.355 "data_offset": 2048, 00:15:31.355 "data_size": 63488 00:15:31.355 } 00:15:31.355 ] 00:15:31.355 }' 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.355 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.615 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.615 
[2024-12-08 20:11:03.585685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.876 "name": "Existed_Raid", 00:15:31.876 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:31.876 "strip_size_kb": 64, 00:15:31.876 "state": "configuring", 00:15:31.876 "raid_level": "raid5f", 00:15:31.876 "superblock": true, 00:15:31.876 "num_base_bdevs": 4, 00:15:31.876 "num_base_bdevs_discovered": 2, 00:15:31.876 "num_base_bdevs_operational": 4, 00:15:31.876 "base_bdevs_list": [ 00:15:31.876 { 00:15:31.876 "name": "BaseBdev1", 00:15:31.876 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:31.876 "is_configured": true, 00:15:31.876 "data_offset": 2048, 00:15:31.876 "data_size": 63488 00:15:31.876 }, 00:15:31.876 { 00:15:31.876 "name": null, 00:15:31.876 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:31.876 "is_configured": false, 00:15:31.876 "data_offset": 0, 00:15:31.876 "data_size": 63488 00:15:31.876 }, 00:15:31.876 { 00:15:31.876 "name": null, 00:15:31.876 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:31.876 "is_configured": false, 00:15:31.876 "data_offset": 0, 00:15:31.876 "data_size": 63488 00:15:31.876 }, 00:15:31.876 { 00:15:31.876 "name": "BaseBdev4", 00:15:31.876 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:31.876 "is_configured": true, 00:15:31.876 "data_offset": 2048, 00:15:31.876 "data_size": 63488 00:15:31.876 } 00:15:31.876 ] 00:15:31.876 }' 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.876 20:11:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.136 [2024-12-08 20:11:04.092878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.136 20:11:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.136 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.396 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.396 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.396 "name": "Existed_Raid", 00:15:32.396 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:32.396 "strip_size_kb": 64, 00:15:32.396 "state": "configuring", 00:15:32.396 "raid_level": "raid5f", 00:15:32.396 "superblock": true, 00:15:32.396 "num_base_bdevs": 4, 00:15:32.396 "num_base_bdevs_discovered": 3, 00:15:32.396 "num_base_bdevs_operational": 4, 00:15:32.396 "base_bdevs_list": [ 00:15:32.396 { 00:15:32.396 "name": "BaseBdev1", 00:15:32.396 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:32.396 "is_configured": true, 00:15:32.396 "data_offset": 2048, 00:15:32.396 "data_size": 63488 00:15:32.396 }, 00:15:32.396 { 00:15:32.396 "name": null, 00:15:32.396 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:32.396 "is_configured": false, 00:15:32.396 "data_offset": 0, 00:15:32.396 "data_size": 63488 00:15:32.396 }, 00:15:32.396 { 00:15:32.396 "name": "BaseBdev3", 00:15:32.396 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:32.396 "is_configured": true, 00:15:32.396 "data_offset": 2048, 00:15:32.396 "data_size": 63488 00:15:32.396 }, 00:15:32.396 { 
00:15:32.396 "name": "BaseBdev4", 00:15:32.396 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:32.396 "is_configured": true, 00:15:32.396 "data_offset": 2048, 00:15:32.396 "data_size": 63488 00:15:32.396 } 00:15:32.396 ] 00:15:32.396 }' 00:15:32.396 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.396 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.656 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.656 [2024-12-08 20:11:04.588076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.916 "name": "Existed_Raid", 00:15:32.916 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:32.916 "strip_size_kb": 64, 00:15:32.916 "state": "configuring", 00:15:32.916 "raid_level": "raid5f", 00:15:32.916 "superblock": true, 00:15:32.916 "num_base_bdevs": 4, 00:15:32.916 "num_base_bdevs_discovered": 2, 00:15:32.916 
"num_base_bdevs_operational": 4, 00:15:32.916 "base_bdevs_list": [ 00:15:32.916 { 00:15:32.916 "name": null, 00:15:32.916 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:32.916 "is_configured": false, 00:15:32.916 "data_offset": 0, 00:15:32.916 "data_size": 63488 00:15:32.916 }, 00:15:32.916 { 00:15:32.916 "name": null, 00:15:32.916 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:32.916 "is_configured": false, 00:15:32.916 "data_offset": 0, 00:15:32.916 "data_size": 63488 00:15:32.916 }, 00:15:32.916 { 00:15:32.916 "name": "BaseBdev3", 00:15:32.916 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:32.916 "is_configured": true, 00:15:32.916 "data_offset": 2048, 00:15:32.916 "data_size": 63488 00:15:32.916 }, 00:15:32.916 { 00:15:32.916 "name": "BaseBdev4", 00:15:32.916 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:32.916 "is_configured": true, 00:15:32.916 "data_offset": 2048, 00:15:32.916 "data_size": 63488 00:15:32.916 } 00:15:32.916 ] 00:15:32.916 }' 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.916 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.177 20:11:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.177 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.177 20:11:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 [2024-12-08 20:11:05.039606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.177 "name": "Existed_Raid", 00:15:33.177 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:33.177 "strip_size_kb": 64, 00:15:33.177 "state": "configuring", 00:15:33.177 "raid_level": "raid5f", 00:15:33.177 "superblock": true, 00:15:33.177 "num_base_bdevs": 4, 00:15:33.177 "num_base_bdevs_discovered": 3, 00:15:33.177 "num_base_bdevs_operational": 4, 00:15:33.177 "base_bdevs_list": [ 00:15:33.177 { 00:15:33.177 "name": null, 00:15:33.177 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:33.177 "is_configured": false, 00:15:33.177 "data_offset": 0, 00:15:33.177 "data_size": 63488 00:15:33.177 }, 00:15:33.177 { 00:15:33.177 "name": "BaseBdev2", 00:15:33.177 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:33.177 "is_configured": true, 00:15:33.177 "data_offset": 2048, 00:15:33.177 "data_size": 63488 00:15:33.177 }, 00:15:33.177 { 00:15:33.177 "name": "BaseBdev3", 00:15:33.177 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:33.177 "is_configured": true, 00:15:33.177 "data_offset": 2048, 00:15:33.177 "data_size": 63488 00:15:33.177 }, 00:15:33.177 { 00:15:33.177 "name": "BaseBdev4", 00:15:33.177 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:33.177 "is_configured": true, 00:15:33.177 "data_offset": 2048, 00:15:33.177 "data_size": 63488 00:15:33.177 } 00:15:33.177 ] 00:15:33.177 }' 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.177 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 401caa8f-dec3-4d5c-9940-045e82bd01e2 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.746 [2024-12-08 20:11:05.565626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:33.746 [2024-12-08 20:11:05.565957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:33.746 [2024-12-08 
20:11:05.566014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:33.746 [2024-12-08 20:11:05.566318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:33.746 NewBaseBdev 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.746 [2024-12-08 20:11:05.573472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:33.746 [2024-12-08 20:11:05.573527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:33.746 [2024-12-08 20:11:05.573847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.746 [ 00:15:33.746 { 00:15:33.746 "name": "NewBaseBdev", 00:15:33.746 "aliases": [ 00:15:33.746 "401caa8f-dec3-4d5c-9940-045e82bd01e2" 00:15:33.746 ], 00:15:33.746 "product_name": "Malloc disk", 00:15:33.746 "block_size": 512, 00:15:33.746 "num_blocks": 65536, 00:15:33.746 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:33.746 "assigned_rate_limits": { 00:15:33.746 "rw_ios_per_sec": 0, 00:15:33.746 "rw_mbytes_per_sec": 0, 00:15:33.746 "r_mbytes_per_sec": 0, 00:15:33.746 "w_mbytes_per_sec": 0 00:15:33.746 }, 00:15:33.746 "claimed": true, 00:15:33.746 "claim_type": "exclusive_write", 00:15:33.746 "zoned": false, 00:15:33.746 "supported_io_types": { 00:15:33.746 "read": true, 00:15:33.746 "write": true, 00:15:33.746 "unmap": true, 00:15:33.746 "flush": true, 00:15:33.746 "reset": true, 00:15:33.746 "nvme_admin": false, 00:15:33.746 "nvme_io": false, 00:15:33.746 "nvme_io_md": false, 00:15:33.746 "write_zeroes": true, 00:15:33.746 "zcopy": true, 00:15:33.746 "get_zone_info": false, 00:15:33.746 "zone_management": false, 00:15:33.746 "zone_append": false, 00:15:33.746 "compare": false, 00:15:33.746 "compare_and_write": false, 00:15:33.746 "abort": true, 00:15:33.746 "seek_hole": false, 00:15:33.746 "seek_data": false, 00:15:33.746 "copy": true, 00:15:33.746 "nvme_iov_md": false 00:15:33.746 }, 00:15:33.746 "memory_domains": [ 00:15:33.746 { 00:15:33.746 "dma_device_id": "system", 00:15:33.746 "dma_device_type": 1 00:15:33.746 }, 00:15:33.746 { 00:15:33.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.746 "dma_device_type": 2 00:15:33.746 } 00:15:33.746 ], 00:15:33.746 "driver_specific": {} 00:15:33.746 } 00:15:33.746 ] 00:15:33.746 20:11:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.746 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.747 "name": "Existed_Raid", 00:15:33.747 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:33.747 "strip_size_kb": 64, 00:15:33.747 "state": "online", 00:15:33.747 "raid_level": "raid5f", 00:15:33.747 "superblock": true, 00:15:33.747 "num_base_bdevs": 4, 00:15:33.747 "num_base_bdevs_discovered": 4, 00:15:33.747 "num_base_bdevs_operational": 4, 00:15:33.747 "base_bdevs_list": [ 00:15:33.747 { 00:15:33.747 "name": "NewBaseBdev", 00:15:33.747 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:33.747 "is_configured": true, 00:15:33.747 "data_offset": 2048, 00:15:33.747 "data_size": 63488 00:15:33.747 }, 00:15:33.747 { 00:15:33.747 "name": "BaseBdev2", 00:15:33.747 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:33.747 "is_configured": true, 00:15:33.747 "data_offset": 2048, 00:15:33.747 "data_size": 63488 00:15:33.747 }, 00:15:33.747 { 00:15:33.747 "name": "BaseBdev3", 00:15:33.747 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:33.747 "is_configured": true, 00:15:33.747 "data_offset": 2048, 00:15:33.747 "data_size": 63488 00:15:33.747 }, 00:15:33.747 { 00:15:33.747 "name": "BaseBdev4", 00:15:33.747 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:33.747 "is_configured": true, 00:15:33.747 "data_offset": 2048, 00:15:33.747 "data_size": 63488 00:15:33.747 } 00:15:33.747 ] 00:15:33.747 }' 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.747 20:11:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 [2024-12-08 20:11:06.065102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.316 "name": "Existed_Raid", 00:15:34.316 "aliases": [ 00:15:34.316 "7efa9c4d-be28-4188-ba90-e289fe94230e" 00:15:34.316 ], 00:15:34.316 "product_name": "Raid Volume", 00:15:34.316 "block_size": 512, 00:15:34.316 "num_blocks": 190464, 00:15:34.316 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:34.316 "assigned_rate_limits": { 00:15:34.316 "rw_ios_per_sec": 0, 00:15:34.316 "rw_mbytes_per_sec": 0, 00:15:34.316 "r_mbytes_per_sec": 0, 00:15:34.316 "w_mbytes_per_sec": 0 00:15:34.316 }, 00:15:34.316 "claimed": false, 00:15:34.316 "zoned": false, 00:15:34.316 "supported_io_types": { 00:15:34.316 "read": true, 00:15:34.316 "write": true, 00:15:34.316 "unmap": false, 00:15:34.316 "flush": false, 00:15:34.316 "reset": true, 00:15:34.316 "nvme_admin": false, 00:15:34.316 "nvme_io": false, 
00:15:34.316 "nvme_io_md": false, 00:15:34.316 "write_zeroes": true, 00:15:34.316 "zcopy": false, 00:15:34.316 "get_zone_info": false, 00:15:34.316 "zone_management": false, 00:15:34.316 "zone_append": false, 00:15:34.316 "compare": false, 00:15:34.316 "compare_and_write": false, 00:15:34.316 "abort": false, 00:15:34.316 "seek_hole": false, 00:15:34.316 "seek_data": false, 00:15:34.316 "copy": false, 00:15:34.316 "nvme_iov_md": false 00:15:34.316 }, 00:15:34.316 "driver_specific": { 00:15:34.316 "raid": { 00:15:34.316 "uuid": "7efa9c4d-be28-4188-ba90-e289fe94230e", 00:15:34.316 "strip_size_kb": 64, 00:15:34.316 "state": "online", 00:15:34.316 "raid_level": "raid5f", 00:15:34.316 "superblock": true, 00:15:34.316 "num_base_bdevs": 4, 00:15:34.316 "num_base_bdevs_discovered": 4, 00:15:34.316 "num_base_bdevs_operational": 4, 00:15:34.316 "base_bdevs_list": [ 00:15:34.316 { 00:15:34.316 "name": "NewBaseBdev", 00:15:34.316 "uuid": "401caa8f-dec3-4d5c-9940-045e82bd01e2", 00:15:34.316 "is_configured": true, 00:15:34.316 "data_offset": 2048, 00:15:34.316 "data_size": 63488 00:15:34.316 }, 00:15:34.316 { 00:15:34.316 "name": "BaseBdev2", 00:15:34.316 "uuid": "6c62f664-e0af-463c-a2c4-4c0367bb959d", 00:15:34.316 "is_configured": true, 00:15:34.316 "data_offset": 2048, 00:15:34.316 "data_size": 63488 00:15:34.316 }, 00:15:34.316 { 00:15:34.316 "name": "BaseBdev3", 00:15:34.316 "uuid": "5d2e2694-7565-4ce9-b664-c5c1cb553f9a", 00:15:34.316 "is_configured": true, 00:15:34.316 "data_offset": 2048, 00:15:34.316 "data_size": 63488 00:15:34.316 }, 00:15:34.316 { 00:15:34.316 "name": "BaseBdev4", 00:15:34.316 "uuid": "00978b4c-a5bb-4f0e-8b49-db5b4621ad74", 00:15:34.316 "is_configured": true, 00:15:34.316 "data_offset": 2048, 00:15:34.316 "data_size": 63488 00:15:34.316 } 00:15:34.316 ] 00:15:34.316 } 00:15:34.316 } 00:15:34.316 }' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:34.316 BaseBdev2 00:15:34.316 BaseBdev3 00:15:34.316 BaseBdev4' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.316 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.317 20:11:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.317 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.577 [2024-12-08 20:11:06.376352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.577 [2024-12-08 20:11:06.376418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.577 [2024-12-08 20:11:06.376489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.577 [2024-12-08 20:11:06.376829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.577 [2024-12-08 20:11:06.376840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83113 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83113 ']' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83113 00:15:34.577 20:11:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83113 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.577 killing process with pid 83113 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83113' 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83113 00:15:34.577 [2024-12-08 20:11:06.423700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.577 20:11:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83113 00:15:34.837 [2024-12-08 20:11:06.794801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.236 20:11:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:36.236 00:15:36.236 real 0m11.002s 00:15:36.236 user 0m17.490s 00:15:36.236 sys 0m1.967s 00:15:36.236 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.237 ************************************ 00:15:36.237 END TEST raid5f_state_function_test_sb 00:15:36.237 ************************************ 00:15:36.237 20:11:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.237 20:11:07 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:36.237 20:11:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:36.237 
20:11:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.237 20:11:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.237 ************************************ 00:15:36.237 START TEST raid5f_superblock_test 00:15:36.237 ************************************ 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83778 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83778 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83778 ']' 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.237 20:11:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.237 [2024-12-08 20:11:08.008066] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:15:36.237 [2024-12-08 20:11:08.008679] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83778 ] 00:15:36.237 [2024-12-08 20:11:08.182265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.522 [2024-12-08 20:11:08.284961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.522 [2024-12-08 20:11:08.476595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.522 [2024-12-08 20:11:08.476686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 malloc1 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 [2024-12-08 20:11:08.871346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.109 [2024-12-08 20:11:08.871463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.109 [2024-12-08 20:11:08.871502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:37.109 [2024-12-08 20:11:08.871511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.109 [2024-12-08 20:11:08.873612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.109 [2024-12-08 20:11:08.873647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.109 pt1 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 malloc2 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 [2024-12-08 20:11:08.922732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.109 [2024-12-08 20:11:08.922832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.109 [2024-12-08 20:11:08.922873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:37.109 [2024-12-08 20:11:08.922902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.109 [2024-12-08 20:11:08.924929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.109 [2024-12-08 20:11:08.925010] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.109 pt2 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 malloc3 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 [2024-12-08 20:11:08.985017] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:37.109 [2024-12-08 20:11:08.985112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.109 [2024-12-08 20:11:08.985146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:37.109 [2024-12-08 20:11:08.985174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.109 [2024-12-08 20:11:08.987185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.109 [2024-12-08 20:11:08.987247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:37.109 pt3 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:37.109 20:11:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 malloc4 00:15:37.109 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.109 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:37.109 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.109 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 [2024-12-08 20:11:09.039433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:37.109 [2024-12-08 20:11:09.039536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.109 [2024-12-08 20:11:09.039563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:37.109 [2024-12-08 20:11:09.039571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.109 [2024-12-08 20:11:09.041541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.110 [2024-12-08 20:11:09.041576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:37.110 pt4 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.110 [2024-12-08 20:11:09.051460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.110 [2024-12-08 20:11:09.053221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.110 [2024-12-08 20:11:09.053304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:37.110 [2024-12-08 20:11:09.053349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:37.110 [2024-12-08 20:11:09.053524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:37.110 [2024-12-08 20:11:09.053539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:37.110 [2024-12-08 20:11:09.053774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:37.110 [2024-12-08 20:11:09.060617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:37.110 [2024-12-08 20:11:09.060639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:37.110 [2024-12-08 20:11:09.060795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.110 
20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.110 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.370 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.370 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.370 "name": "raid_bdev1", 00:15:37.370 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:37.370 "strip_size_kb": 64, 00:15:37.370 "state": "online", 00:15:37.370 "raid_level": "raid5f", 00:15:37.370 "superblock": true, 00:15:37.370 "num_base_bdevs": 4, 00:15:37.370 "num_base_bdevs_discovered": 4, 00:15:37.370 "num_base_bdevs_operational": 4, 00:15:37.370 "base_bdevs_list": [ 00:15:37.370 { 00:15:37.370 "name": "pt1", 00:15:37.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.370 "is_configured": true, 00:15:37.370 "data_offset": 2048, 00:15:37.370 "data_size": 63488 00:15:37.370 }, 00:15:37.370 { 00:15:37.370 "name": "pt2", 00:15:37.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.370 "is_configured": true, 00:15:37.370 "data_offset": 2048, 00:15:37.370 
"data_size": 63488 00:15:37.370 }, 00:15:37.370 { 00:15:37.370 "name": "pt3", 00:15:37.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.370 "is_configured": true, 00:15:37.370 "data_offset": 2048, 00:15:37.370 "data_size": 63488 00:15:37.370 }, 00:15:37.370 { 00:15:37.370 "name": "pt4", 00:15:37.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.370 "is_configured": true, 00:15:37.370 "data_offset": 2048, 00:15:37.370 "data_size": 63488 00:15:37.370 } 00:15:37.370 ] 00:15:37.370 }' 00:15:37.370 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.370 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 [2024-12-08 20:11:09.492461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.630 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.630 "name": "raid_bdev1", 00:15:37.630 "aliases": [ 00:15:37.630 "120ce3ce-ff89-401b-87d2-9ab96d9bf266" 00:15:37.630 ], 00:15:37.630 "product_name": "Raid Volume", 00:15:37.630 "block_size": 512, 00:15:37.630 "num_blocks": 190464, 00:15:37.630 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:37.630 "assigned_rate_limits": { 00:15:37.630 "rw_ios_per_sec": 0, 00:15:37.630 "rw_mbytes_per_sec": 0, 00:15:37.630 "r_mbytes_per_sec": 0, 00:15:37.630 "w_mbytes_per_sec": 0 00:15:37.630 }, 00:15:37.630 "claimed": false, 00:15:37.630 "zoned": false, 00:15:37.630 "supported_io_types": { 00:15:37.630 "read": true, 00:15:37.630 "write": true, 00:15:37.630 "unmap": false, 00:15:37.630 "flush": false, 00:15:37.630 "reset": true, 00:15:37.630 "nvme_admin": false, 00:15:37.630 "nvme_io": false, 00:15:37.630 "nvme_io_md": false, 00:15:37.630 "write_zeroes": true, 00:15:37.630 "zcopy": false, 00:15:37.630 "get_zone_info": false, 00:15:37.630 "zone_management": false, 00:15:37.630 "zone_append": false, 00:15:37.630 "compare": false, 00:15:37.630 "compare_and_write": false, 00:15:37.630 "abort": false, 00:15:37.630 "seek_hole": false, 00:15:37.630 "seek_data": false, 00:15:37.630 "copy": false, 00:15:37.630 "nvme_iov_md": false 00:15:37.630 }, 00:15:37.630 "driver_specific": { 00:15:37.630 "raid": { 00:15:37.630 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:37.630 "strip_size_kb": 64, 00:15:37.630 "state": "online", 00:15:37.630 "raid_level": "raid5f", 00:15:37.631 "superblock": true, 00:15:37.631 "num_base_bdevs": 4, 00:15:37.631 "num_base_bdevs_discovered": 4, 00:15:37.631 "num_base_bdevs_operational": 4, 00:15:37.631 "base_bdevs_list": [ 00:15:37.631 { 00:15:37.631 "name": "pt1", 00:15:37.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.631 "is_configured": true, 00:15:37.631 "data_offset": 2048, 
00:15:37.631 "data_size": 63488 00:15:37.631 }, 00:15:37.631 { 00:15:37.631 "name": "pt2", 00:15:37.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.631 "is_configured": true, 00:15:37.631 "data_offset": 2048, 00:15:37.631 "data_size": 63488 00:15:37.631 }, 00:15:37.631 { 00:15:37.631 "name": "pt3", 00:15:37.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.631 "is_configured": true, 00:15:37.631 "data_offset": 2048, 00:15:37.631 "data_size": 63488 00:15:37.631 }, 00:15:37.631 { 00:15:37.631 "name": "pt4", 00:15:37.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.631 "is_configured": true, 00:15:37.631 "data_offset": 2048, 00:15:37.631 "data_size": 63488 00:15:37.631 } 00:15:37.631 ] 00:15:37.631 } 00:15:37.631 } 00:15:37.631 }' 00:15:37.631 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.631 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:37.631 pt2 00:15:37.631 pt3 00:15:37.631 pt4' 00:15:37.631 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 20:11:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 [2024-12-08 20:11:09.827834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.891 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=120ce3ce-ff89-401b-87d2-9ab96d9bf266 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
120ce3ce-ff89-401b-87d2-9ab96d9bf266 ']' 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.153 [2024-12-08 20:11:09.871613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.153 [2024-12-08 20:11:09.871636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.153 [2024-12-08 20:11:09.871708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.153 [2024-12-08 20:11:09.871793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.153 [2024-12-08 20:11:09.871807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:38.153 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.154 
20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:38.154 20:11:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 [2024-12-08 20:11:10.039361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:38.154 [2024-12-08 20:11:10.041174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:38.154 [2024-12-08 20:11:10.041219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:38.154 [2024-12-08 20:11:10.041250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:38.154 [2024-12-08 20:11:10.041295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:38.154 [2024-12-08 20:11:10.041353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:38.154 [2024-12-08 20:11:10.041372] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:38.154 [2024-12-08 20:11:10.041389] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:38.154 [2024-12-08 20:11:10.041401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.154 [2024-12-08 20:11:10.041411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:38.154 request: 00:15:38.154 { 00:15:38.154 "name": "raid_bdev1", 00:15:38.154 "raid_level": "raid5f", 00:15:38.154 "base_bdevs": [ 00:15:38.154 "malloc1", 00:15:38.154 "malloc2", 00:15:38.154 "malloc3", 00:15:38.154 "malloc4" 00:15:38.154 ], 00:15:38.154 "strip_size_kb": 64, 00:15:38.154 "superblock": false, 00:15:38.154 "method": "bdev_raid_create", 00:15:38.154 "req_id": 1 00:15:38.154 } 00:15:38.154 Got JSON-RPC error response 
00:15:38.154 response: 00:15:38.154 { 00:15:38.154 "code": -17, 00:15:38.154 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:38.154 } 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.154 [2024-12-08 20:11:10.107212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.154 [2024-12-08 20:11:10.107295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:38.154 [2024-12-08 20:11:10.107328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:38.154 [2024-12-08 20:11:10.107356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.154 [2024-12-08 20:11:10.109506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.154 [2024-12-08 20:11:10.109574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.154 [2024-12-08 20:11:10.109661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:38.154 [2024-12-08 20:11:10.109734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.154 pt1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.154 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.415 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.415 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.415 "name": "raid_bdev1", 00:15:38.415 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:38.415 "strip_size_kb": 64, 00:15:38.415 "state": "configuring", 00:15:38.415 "raid_level": "raid5f", 00:15:38.415 "superblock": true, 00:15:38.415 "num_base_bdevs": 4, 00:15:38.415 "num_base_bdevs_discovered": 1, 00:15:38.415 "num_base_bdevs_operational": 4, 00:15:38.415 "base_bdevs_list": [ 00:15:38.415 { 00:15:38.415 "name": "pt1", 00:15:38.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.415 "is_configured": true, 00:15:38.415 "data_offset": 2048, 00:15:38.415 "data_size": 63488 00:15:38.415 }, 00:15:38.415 { 00:15:38.415 "name": null, 00:15:38.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.415 "is_configured": false, 00:15:38.415 "data_offset": 2048, 00:15:38.415 "data_size": 63488 00:15:38.415 }, 00:15:38.415 { 00:15:38.415 "name": null, 00:15:38.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.415 "is_configured": false, 00:15:38.415 "data_offset": 2048, 00:15:38.415 "data_size": 63488 00:15:38.415 }, 00:15:38.415 { 00:15:38.415 "name": null, 00:15:38.415 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.415 "is_configured": false, 00:15:38.415 "data_offset": 2048, 00:15:38.415 "data_size": 63488 00:15:38.415 } 00:15:38.415 ] 00:15:38.415 }' 
00:15:38.415 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.415 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.675 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.676 [2024-12-08 20:11:10.522519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.676 [2024-12-08 20:11:10.522578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.676 [2024-12-08 20:11:10.522593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:38.676 [2024-12-08 20:11:10.522603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.676 [2024-12-08 20:11:10.522996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.676 [2024-12-08 20:11:10.523016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.676 [2024-12-08 20:11:10.523079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:38.676 [2024-12-08 20:11:10.523102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.676 pt2 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.676 [2024-12-08 20:11:10.534506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.676 "name": "raid_bdev1", 00:15:38.676 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:38.676 "strip_size_kb": 64, 00:15:38.676 "state": "configuring", 00:15:38.676 "raid_level": "raid5f", 00:15:38.676 "superblock": true, 00:15:38.676 "num_base_bdevs": 4, 00:15:38.676 "num_base_bdevs_discovered": 1, 00:15:38.676 "num_base_bdevs_operational": 4, 00:15:38.676 "base_bdevs_list": [ 00:15:38.676 { 00:15:38.676 "name": "pt1", 00:15:38.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.676 "is_configured": true, 00:15:38.676 "data_offset": 2048, 00:15:38.676 "data_size": 63488 00:15:38.676 }, 00:15:38.676 { 00:15:38.676 "name": null, 00:15:38.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.676 "is_configured": false, 00:15:38.676 "data_offset": 0, 00:15:38.676 "data_size": 63488 00:15:38.676 }, 00:15:38.676 { 00:15:38.676 "name": null, 00:15:38.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.676 "is_configured": false, 00:15:38.676 "data_offset": 2048, 00:15:38.676 "data_size": 63488 00:15:38.676 }, 00:15:38.676 { 00:15:38.676 "name": null, 00:15:38.676 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.676 "is_configured": false, 00:15:38.676 "data_offset": 2048, 00:15:38.676 "data_size": 63488 00:15:38.676 } 00:15:38.676 ] 00:15:38.676 }' 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.676 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.246 [2024-12-08 20:11:10.961776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.246 [2024-12-08 20:11:10.961875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.246 [2024-12-08 20:11:10.961913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:39.246 [2024-12-08 20:11:10.961939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.246 [2024-12-08 20:11:10.962436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.246 [2024-12-08 20:11:10.962498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.246 [2024-12-08 20:11:10.962631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:39.246 [2024-12-08 20:11:10.962682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.246 pt2 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.246 [2024-12-08 20:11:10.973734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:39.246 [2024-12-08 20:11:10.973777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.246 [2024-12-08 20:11:10.973814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:39.246 [2024-12-08 20:11:10.973824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.246 [2024-12-08 20:11:10.974181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.246 [2024-12-08 20:11:10.974203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:39.246 [2024-12-08 20:11:10.974263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:39.246 [2024-12-08 20:11:10.974286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:39.246 pt3 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.246 [2024-12-08 20:11:10.985686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:39.246 [2024-12-08 20:11:10.985725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.246 [2024-12-08 20:11:10.985755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:39.246 [2024-12-08 20:11:10.985762] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.246 [2024-12-08 20:11:10.986103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.246 [2024-12-08 20:11:10.986140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:39.246 [2024-12-08 20:11:10.986196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:39.246 [2024-12-08 20:11:10.986216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:39.246 [2024-12-08 20:11:10.986343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:39.246 [2024-12-08 20:11:10.986355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:39.246 [2024-12-08 20:11:10.986586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:39.246 [2024-12-08 20:11:10.993491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:39.246 [2024-12-08 20:11:10.993512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:39.246 [2024-12-08 20:11:10.993653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.246 pt4 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.246 20:11:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.246 "name": "raid_bdev1", 00:15:39.246 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:39.246 "strip_size_kb": 64, 00:15:39.246 "state": "online", 00:15:39.246 "raid_level": "raid5f", 00:15:39.246 "superblock": true, 00:15:39.246 "num_base_bdevs": 4, 00:15:39.246 "num_base_bdevs_discovered": 4, 00:15:39.246 "num_base_bdevs_operational": 4, 00:15:39.246 "base_bdevs_list": [ 00:15:39.246 { 00:15:39.246 "name": "pt1", 00:15:39.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.246 "is_configured": true, 00:15:39.246 
"data_offset": 2048, 00:15:39.246 "data_size": 63488 00:15:39.246 }, 00:15:39.246 { 00:15:39.246 "name": "pt2", 00:15:39.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.246 "is_configured": true, 00:15:39.246 "data_offset": 2048, 00:15:39.246 "data_size": 63488 00:15:39.246 }, 00:15:39.246 { 00:15:39.246 "name": "pt3", 00:15:39.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.246 "is_configured": true, 00:15:39.246 "data_offset": 2048, 00:15:39.246 "data_size": 63488 00:15:39.246 }, 00:15:39.246 { 00:15:39.246 "name": "pt4", 00:15:39.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.246 "is_configured": true, 00:15:39.246 "data_offset": 2048, 00:15:39.246 "data_size": 63488 00:15:39.246 } 00:15:39.246 ] 00:15:39.246 }' 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.246 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.505 20:11:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.505 [2024-12-08 20:11:11.437258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.505 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.505 "name": "raid_bdev1", 00:15:39.505 "aliases": [ 00:15:39.505 "120ce3ce-ff89-401b-87d2-9ab96d9bf266" 00:15:39.505 ], 00:15:39.505 "product_name": "Raid Volume", 00:15:39.505 "block_size": 512, 00:15:39.505 "num_blocks": 190464, 00:15:39.505 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:39.505 "assigned_rate_limits": { 00:15:39.505 "rw_ios_per_sec": 0, 00:15:39.505 "rw_mbytes_per_sec": 0, 00:15:39.505 "r_mbytes_per_sec": 0, 00:15:39.505 "w_mbytes_per_sec": 0 00:15:39.505 }, 00:15:39.505 "claimed": false, 00:15:39.505 "zoned": false, 00:15:39.505 "supported_io_types": { 00:15:39.505 "read": true, 00:15:39.505 "write": true, 00:15:39.505 "unmap": false, 00:15:39.505 "flush": false, 00:15:39.505 "reset": true, 00:15:39.505 "nvme_admin": false, 00:15:39.505 "nvme_io": false, 00:15:39.505 "nvme_io_md": false, 00:15:39.505 "write_zeroes": true, 00:15:39.505 "zcopy": false, 00:15:39.505 "get_zone_info": false, 00:15:39.505 "zone_management": false, 00:15:39.505 "zone_append": false, 00:15:39.505 "compare": false, 00:15:39.505 "compare_and_write": false, 00:15:39.505 "abort": false, 00:15:39.505 "seek_hole": false, 00:15:39.505 "seek_data": false, 00:15:39.505 "copy": false, 00:15:39.505 "nvme_iov_md": false 00:15:39.505 }, 00:15:39.505 "driver_specific": { 00:15:39.505 "raid": { 00:15:39.505 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:39.505 "strip_size_kb": 64, 00:15:39.505 "state": "online", 00:15:39.505 "raid_level": "raid5f", 00:15:39.505 "superblock": true, 00:15:39.505 "num_base_bdevs": 4, 00:15:39.505 "num_base_bdevs_discovered": 4, 
00:15:39.506 "num_base_bdevs_operational": 4, 00:15:39.506 "base_bdevs_list": [ 00:15:39.506 { 00:15:39.506 "name": "pt1", 00:15:39.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.506 "is_configured": true, 00:15:39.506 "data_offset": 2048, 00:15:39.506 "data_size": 63488 00:15:39.506 }, 00:15:39.506 { 00:15:39.506 "name": "pt2", 00:15:39.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.506 "is_configured": true, 00:15:39.506 "data_offset": 2048, 00:15:39.506 "data_size": 63488 00:15:39.506 }, 00:15:39.506 { 00:15:39.506 "name": "pt3", 00:15:39.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.506 "is_configured": true, 00:15:39.506 "data_offset": 2048, 00:15:39.506 "data_size": 63488 00:15:39.506 }, 00:15:39.506 { 00:15:39.506 "name": "pt4", 00:15:39.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.506 "is_configured": true, 00:15:39.506 "data_offset": 2048, 00:15:39.506 "data_size": 63488 00:15:39.506 } 00:15:39.506 ] 00:15:39.506 } 00:15:39.506 } 00:15:39.506 }' 00:15:39.506 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:39.765 pt2 00:15:39.765 pt3 00:15:39.765 pt4' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.765 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.766 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 [2024-12-08 20:11:11.744757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.028 20:11:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 120ce3ce-ff89-401b-87d2-9ab96d9bf266 '!=' 120ce3ce-ff89-401b-87d2-9ab96d9bf266 ']' 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 [2024-12-08 20:11:11.788546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.028 "name": "raid_bdev1", 00:15:40.028 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:40.028 "strip_size_kb": 64, 00:15:40.028 "state": "online", 00:15:40.028 "raid_level": "raid5f", 00:15:40.028 "superblock": true, 00:15:40.028 "num_base_bdevs": 4, 00:15:40.028 "num_base_bdevs_discovered": 3, 00:15:40.028 "num_base_bdevs_operational": 3, 00:15:40.028 "base_bdevs_list": [ 00:15:40.028 { 00:15:40.028 "name": null, 00:15:40.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.028 "is_configured": false, 00:15:40.028 "data_offset": 0, 00:15:40.028 "data_size": 63488 00:15:40.028 }, 00:15:40.028 { 00:15:40.028 "name": "pt2", 00:15:40.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.028 "is_configured": true, 00:15:40.028 "data_offset": 2048, 00:15:40.028 "data_size": 63488 00:15:40.028 }, 00:15:40.028 { 00:15:40.028 "name": "pt3", 00:15:40.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.028 "is_configured": true, 00:15:40.028 "data_offset": 2048, 00:15:40.028 "data_size": 63488 00:15:40.028 }, 00:15:40.028 { 00:15:40.028 "name": "pt4", 00:15:40.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.028 "is_configured": true, 00:15:40.028 
"data_offset": 2048, 00:15:40.028 "data_size": 63488 00:15:40.028 } 00:15:40.028 ] 00:15:40.028 }' 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.028 20:11:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 [2024-12-08 20:11:12.199799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.287 [2024-12-08 20:11:12.199868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.287 [2024-12-08 20:11:12.199965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.287 [2024-12-08 20:11:12.200091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.287 [2024-12-08 20:11:12.200139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.287 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 [2024-12-08 20:11:12.295626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.547 [2024-12-08 20:11:12.295671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.547 [2024-12-08 20:11:12.295704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:40.547 [2024-12-08 20:11:12.295713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.547 [2024-12-08 20:11:12.297778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.547 [2024-12-08 20:11:12.297812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.547 [2024-12-08 20:11:12.297888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:40.547 [2024-12-08 20:11:12.297929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.547 pt2 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.547 "name": "raid_bdev1", 00:15:40.547 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:40.547 "strip_size_kb": 64, 00:15:40.547 "state": "configuring", 00:15:40.547 "raid_level": "raid5f", 00:15:40.547 "superblock": true, 00:15:40.547 
"num_base_bdevs": 4, 00:15:40.547 "num_base_bdevs_discovered": 1, 00:15:40.547 "num_base_bdevs_operational": 3, 00:15:40.547 "base_bdevs_list": [ 00:15:40.547 { 00:15:40.547 "name": null, 00:15:40.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.547 "is_configured": false, 00:15:40.547 "data_offset": 2048, 00:15:40.547 "data_size": 63488 00:15:40.547 }, 00:15:40.547 { 00:15:40.547 "name": "pt2", 00:15:40.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.547 "is_configured": true, 00:15:40.547 "data_offset": 2048, 00:15:40.547 "data_size": 63488 00:15:40.547 }, 00:15:40.547 { 00:15:40.547 "name": null, 00:15:40.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.547 "is_configured": false, 00:15:40.547 "data_offset": 2048, 00:15:40.547 "data_size": 63488 00:15:40.547 }, 00:15:40.547 { 00:15:40.547 "name": null, 00:15:40.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.547 "is_configured": false, 00:15:40.547 "data_offset": 2048, 00:15:40.547 "data_size": 63488 00:15:40.547 } 00:15:40.547 ] 00:15:40.547 }' 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.547 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 [2024-12-08 20:11:12.679003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:40.807 [2024-12-08 
20:11:12.679106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.807 [2024-12-08 20:11:12.679145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:40.807 [2024-12-08 20:11:12.679175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.807 [2024-12-08 20:11:12.679627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.807 [2024-12-08 20:11:12.679687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:40.807 [2024-12-08 20:11:12.679810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:40.807 [2024-12-08 20:11:12.679860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.807 pt3 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.807 "name": "raid_bdev1", 00:15:40.807 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:40.807 "strip_size_kb": 64, 00:15:40.807 "state": "configuring", 00:15:40.807 "raid_level": "raid5f", 00:15:40.807 "superblock": true, 00:15:40.807 "num_base_bdevs": 4, 00:15:40.807 "num_base_bdevs_discovered": 2, 00:15:40.807 "num_base_bdevs_operational": 3, 00:15:40.807 "base_bdevs_list": [ 00:15:40.807 { 00:15:40.807 "name": null, 00:15:40.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.807 "is_configured": false, 00:15:40.807 "data_offset": 2048, 00:15:40.807 "data_size": 63488 00:15:40.807 }, 00:15:40.807 { 00:15:40.807 "name": "pt2", 00:15:40.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.807 "is_configured": true, 00:15:40.807 "data_offset": 2048, 00:15:40.807 "data_size": 63488 00:15:40.807 }, 00:15:40.807 { 00:15:40.807 "name": "pt3", 00:15:40.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.807 "is_configured": true, 00:15:40.807 "data_offset": 2048, 00:15:40.807 "data_size": 63488 00:15:40.807 }, 00:15:40.807 { 00:15:40.807 "name": null, 00:15:40.807 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.807 "is_configured": false, 00:15:40.807 "data_offset": 2048, 
00:15:40.807 "data_size": 63488 00:15:40.807 } 00:15:40.807 ] 00:15:40.807 }' 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.807 20:11:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.377 [2024-12-08 20:11:13.074348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:41.377 [2024-12-08 20:11:13.074447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.377 [2024-12-08 20:11:13.074475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:41.377 [2024-12-08 20:11:13.074484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.377 [2024-12-08 20:11:13.074955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.377 [2024-12-08 20:11:13.074989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:41.377 [2024-12-08 20:11:13.075096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:41.377 [2024-12-08 20:11:13.075140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:41.377 [2024-12-08 20:11:13.075290] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.377 [2024-12-08 20:11:13.075299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.377 [2024-12-08 20:11:13.075547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:41.377 [2024-12-08 20:11:13.082777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.377 [2024-12-08 20:11:13.082800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:41.377 [2024-12-08 20:11:13.083117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.377 pt4 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.377 
20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.377 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.377 "name": "raid_bdev1", 00:15:41.377 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:41.377 "strip_size_kb": 64, 00:15:41.377 "state": "online", 00:15:41.377 "raid_level": "raid5f", 00:15:41.377 "superblock": true, 00:15:41.377 "num_base_bdevs": 4, 00:15:41.377 "num_base_bdevs_discovered": 3, 00:15:41.377 "num_base_bdevs_operational": 3, 00:15:41.377 "base_bdevs_list": [ 00:15:41.377 { 00:15:41.377 "name": null, 00:15:41.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.377 "is_configured": false, 00:15:41.377 "data_offset": 2048, 00:15:41.377 "data_size": 63488 00:15:41.377 }, 00:15:41.377 { 00:15:41.377 "name": "pt2", 00:15:41.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.378 "is_configured": true, 00:15:41.378 "data_offset": 2048, 00:15:41.378 "data_size": 63488 00:15:41.378 }, 00:15:41.378 { 00:15:41.378 "name": "pt3", 00:15:41.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.378 "is_configured": true, 00:15:41.378 "data_offset": 2048, 00:15:41.378 "data_size": 63488 00:15:41.378 }, 00:15:41.378 { 00:15:41.378 "name": "pt4", 00:15:41.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.378 "is_configured": true, 00:15:41.378 "data_offset": 2048, 00:15:41.378 "data_size": 63488 00:15:41.378 } 00:15:41.378 ] 00:15:41.378 }' 00:15:41.378 20:11:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.378 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 [2024-12-08 20:11:13.487067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.637 [2024-12-08 20:11:13.487135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.637 [2024-12-08 20:11:13.487226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.637 [2024-12-08 20:11:13.487354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.637 [2024-12-08 20:11:13.487432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 [2024-12-08 20:11:13.558961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.637 [2024-12-08 20:11:13.559064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.637 [2024-12-08 20:11:13.559107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:41.637 [2024-12-08 20:11:13.559140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.637 [2024-12-08 20:11:13.561415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.637 [2024-12-08 20:11:13.561487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.637 [2024-12-08 20:11:13.561594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:41.637 [2024-12-08 20:11:13.561673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.637 
[2024-12-08 20:11:13.561885] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:41.637 [2024-12-08 20:11:13.561960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.637 [2024-12-08 20:11:13.562026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:41.637 [2024-12-08 20:11:13.562154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.637 [2024-12-08 20:11:13.562324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:41.637 pt1 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.637 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.637 "name": "raid_bdev1", 00:15:41.637 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:41.637 "strip_size_kb": 64, 00:15:41.638 "state": "configuring", 00:15:41.638 "raid_level": "raid5f", 00:15:41.638 "superblock": true, 00:15:41.638 "num_base_bdevs": 4, 00:15:41.638 "num_base_bdevs_discovered": 2, 00:15:41.638 "num_base_bdevs_operational": 3, 00:15:41.638 "base_bdevs_list": [ 00:15:41.638 { 00:15:41.638 "name": null, 00:15:41.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.638 "is_configured": false, 00:15:41.638 "data_offset": 2048, 00:15:41.638 "data_size": 63488 00:15:41.638 }, 00:15:41.638 { 00:15:41.638 "name": "pt2", 00:15:41.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.638 "is_configured": true, 00:15:41.638 "data_offset": 2048, 00:15:41.638 "data_size": 63488 00:15:41.638 }, 00:15:41.638 { 00:15:41.638 "name": "pt3", 00:15:41.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.638 "is_configured": true, 00:15:41.638 "data_offset": 2048, 00:15:41.638 "data_size": 63488 00:15:41.638 }, 00:15:41.638 { 00:15:41.638 "name": null, 00:15:41.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.638 "is_configured": false, 00:15:41.638 "data_offset": 2048, 00:15:41.638 "data_size": 63488 00:15:41.638 } 00:15:41.638 ] 
00:15:41.638 }' 00:15:41.638 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.638 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.206 20:11:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.206 [2024-12-08 20:11:13.994227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:42.206 [2024-12-08 20:11:13.994319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.206 [2024-12-08 20:11:13.994357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:42.206 [2024-12-08 20:11:13.994385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.206 [2024-12-08 20:11:13.994918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.206 [2024-12-08 20:11:13.994999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:42.206 [2024-12-08 20:11:13.995144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:42.206 [2024-12-08 20:11:13.995204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:42.206 [2024-12-08 20:11:13.995426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:42.206 [2024-12-08 20:11:13.995471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:42.206 [2024-12-08 20:11:13.995784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:42.206 [2024-12-08 20:11:14.004023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:42.206 [2024-12-08 20:11:14.004079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:42.206 [2024-12-08 20:11:14.004457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.206 pt4 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.206 20:11:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.206 "name": "raid_bdev1", 00:15:42.206 "uuid": "120ce3ce-ff89-401b-87d2-9ab96d9bf266", 00:15:42.206 "strip_size_kb": 64, 00:15:42.206 "state": "online", 00:15:42.206 "raid_level": "raid5f", 00:15:42.206 "superblock": true, 00:15:42.206 "num_base_bdevs": 4, 00:15:42.206 "num_base_bdevs_discovered": 3, 00:15:42.206 "num_base_bdevs_operational": 3, 00:15:42.206 "base_bdevs_list": [ 00:15:42.206 { 00:15:42.206 "name": null, 00:15:42.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.206 "is_configured": false, 00:15:42.206 "data_offset": 2048, 00:15:42.206 "data_size": 63488 00:15:42.206 }, 00:15:42.206 { 00:15:42.206 "name": "pt2", 00:15:42.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.206 "is_configured": true, 00:15:42.206 "data_offset": 2048, 00:15:42.206 "data_size": 63488 00:15:42.206 }, 00:15:42.206 { 00:15:42.206 "name": "pt3", 00:15:42.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.206 "is_configured": true, 00:15:42.206 "data_offset": 2048, 00:15:42.206 "data_size": 63488 
00:15:42.206 }, 00:15:42.206 { 00:15:42.206 "name": "pt4", 00:15:42.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.206 "is_configured": true, 00:15:42.206 "data_offset": 2048, 00:15:42.206 "data_size": 63488 00:15:42.206 } 00:15:42.206 ] 00:15:42.206 }' 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.206 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.465 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:42.465 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:42.465 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.465 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.465 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:42.725 [2024-12-08 20:11:14.456976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 120ce3ce-ff89-401b-87d2-9ab96d9bf266 '!=' 120ce3ce-ff89-401b-87d2-9ab96d9bf266 ']' 00:15:42.725 20:11:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83778 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83778 ']' 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83778 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83778 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83778' 00:15:42.725 killing process with pid 83778 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83778 00:15:42.725 [2024-12-08 20:11:14.528260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.725 [2024-12-08 20:11:14.528341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.725 [2024-12-08 20:11:14.528421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.725 [2024-12-08 20:11:14.528437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:42.725 20:11:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83778 00:15:42.984 [2024-12-08 20:11:14.899189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.364 20:11:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:44.364 
00:15:44.364 real 0m8.042s 00:15:44.364 user 0m12.671s 00:15:44.364 sys 0m1.429s 00:15:44.364 20:11:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.364 ************************************ 00:15:44.364 END TEST raid5f_superblock_test 00:15:44.364 ************************************ 00:15:44.364 20:11:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.364 20:11:16 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:44.364 20:11:16 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:44.364 20:11:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:44.364 20:11:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.364 20:11:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.364 ************************************ 00:15:44.364 START TEST raid5f_rebuild_test 00:15:44.364 ************************************ 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.364 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:44.365 20:11:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84259 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84259 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84259 ']' 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.365 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.365 [2024-12-08 20:11:16.133493] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:44.365 [2024-12-08 20:11:16.133697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84259 ]
00:15:44.365 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:44.365 Zero copy mechanism will not be used. 00:15:44.365 [2024-12-08 20:11:16.303330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.624 [2024-12-08 20:11:16.408551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.624 [2024-12-08 20:11:16.594103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.624 [2024-12-08 20:11:16.594234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.194 BaseBdev1_malloc 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.194 20:11:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.194 [2024-12-08 20:11:16.999063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.194 [2024-12-08 20:11:16.999117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:45.194 [2024-12-08 20:11:16.999157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.194 [2024-12-08 20:11:16.999168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.194 [2024-12-08 20:11:17.001186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.194 [2024-12-08 20:11:17.001226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.194 BaseBdev1 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.194 BaseBdev2_malloc 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.194 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.194 [2024-12-08 20:11:17.048381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:45.194 [2024-12-08 20:11:17.048451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.194 [2024-12-08 20:11:17.048474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.194 [2024-12-08 20:11:17.048485] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.195 [2024-12-08 20:11:17.050463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.195 [2024-12-08 20:11:17.050564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.195 BaseBdev2 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.195 BaseBdev3_malloc 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.195 [2024-12-08 20:11:17.120660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:45.195 [2024-12-08 20:11:17.120707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.195 [2024-12-08 20:11:17.120746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.195 [2024-12-08 20:11:17.120756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.195 [2024-12-08 20:11:17.122725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.195 [2024-12-08 
20:11:17.122808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:45.195 BaseBdev3 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.195 BaseBdev4_malloc 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.195 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.195 [2024-12-08 20:11:17.170211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:45.195 [2024-12-08 20:11:17.170289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.195 [2024-12-08 20:11:17.170310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.195 [2024-12-08 20:11:17.170320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.455 [2024-12-08 20:11:17.172498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.455 [2024-12-08 20:11:17.172602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:45.455 BaseBdev4 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.455 spare_malloc 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.455 spare_delay 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.455 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.455 [2024-12-08 20:11:17.234738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.455 [2024-12-08 20:11:17.234822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.455 [2024-12-08 20:11:17.234842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:45.455 [2024-12-08 20:11:17.234852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.455 [2024-12-08 20:11:17.236847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.455 [2024-12-08 20:11:17.236887] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.455 spare 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.456 [2024-12-08 20:11:17.246762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.456 [2024-12-08 20:11:17.248485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.456 [2024-12-08 20:11:17.248591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.456 [2024-12-08 20:11:17.248663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:45.456 [2024-12-08 20:11:17.248747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.456 [2024-12-08 20:11:17.248759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:45.456 [2024-12-08 20:11:17.249006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:45.456 [2024-12-08 20:11:17.256167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.456 [2024-12-08 20:11:17.256185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.456 [2024-12-08 20:11:17.256369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.456 20:11:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.456 "name": "raid_bdev1", 00:15:45.456 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:45.456 "strip_size_kb": 64, 00:15:45.456 "state": "online", 00:15:45.456 "raid_level": "raid5f", 00:15:45.456 "superblock": false, 00:15:45.456 "num_base_bdevs": 4, 00:15:45.456 
"num_base_bdevs_discovered": 4, 00:15:45.456 "num_base_bdevs_operational": 4, 00:15:45.456 "base_bdevs_list": [ 00:15:45.456 { 00:15:45.456 "name": "BaseBdev1", 00:15:45.456 "uuid": "6f2d533f-0887-5af6-b00b-14e2c44a7e30", 00:15:45.456 "is_configured": true, 00:15:45.456 "data_offset": 0, 00:15:45.456 "data_size": 65536 00:15:45.456 }, 00:15:45.456 { 00:15:45.456 "name": "BaseBdev2", 00:15:45.456 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:45.456 "is_configured": true, 00:15:45.456 "data_offset": 0, 00:15:45.456 "data_size": 65536 00:15:45.456 }, 00:15:45.456 { 00:15:45.456 "name": "BaseBdev3", 00:15:45.456 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:45.456 "is_configured": true, 00:15:45.456 "data_offset": 0, 00:15:45.456 "data_size": 65536 00:15:45.456 }, 00:15:45.456 { 00:15:45.456 "name": "BaseBdev4", 00:15:45.456 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:45.456 "is_configured": true, 00:15:45.456 "data_offset": 0, 00:15:45.456 "data_size": 65536 00:15:45.456 } 00:15:45.456 ] 00:15:45.456 }' 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.456 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.716 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.716 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:45.716 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.716 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.716 [2024-12-08 20:11:17.679856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:45.977 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:45.977 [2024-12-08 20:11:17.947245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:46.237 /dev/nbd0 00:15:46.237 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.237 20:11:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.237 1+0 records in 00:15:46.237 1+0 records out 00:15:46.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242689 s, 16.9 MB/s 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:46.237 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:46.808 512+0 records in 00:15:46.808 512+0 records out 00:15:46.808 100663296 bytes (101 MB, 96 MiB) copied, 0.457303 s, 220 MB/s 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:46.808 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.809 [2024-12-08 20:11:18.669720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.809 [2024-12-08 20:11:18.684897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.809 "name": "raid_bdev1", 00:15:46.809 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:46.809 "strip_size_kb": 64, 00:15:46.809 "state": "online", 00:15:46.809 "raid_level": "raid5f", 00:15:46.809 "superblock": false, 00:15:46.809 "num_base_bdevs": 4, 00:15:46.809 "num_base_bdevs_discovered": 3, 00:15:46.809 "num_base_bdevs_operational": 3, 00:15:46.809 "base_bdevs_list": [ 00:15:46.809 { 00:15:46.809 "name": null, 00:15:46.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.809 "is_configured": false, 00:15:46.809 "data_offset": 0, 00:15:46.809 "data_size": 65536 00:15:46.809 }, 00:15:46.809 { 00:15:46.809 "name": "BaseBdev2", 00:15:46.809 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:46.809 "is_configured": true, 00:15:46.809 "data_offset": 0, 00:15:46.809 "data_size": 65536 00:15:46.809 }, 00:15:46.809 { 00:15:46.809 "name": "BaseBdev3", 00:15:46.809 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:46.809 "is_configured": true, 00:15:46.809 
"data_offset": 0, 00:15:46.809 "data_size": 65536 00:15:46.809 }, 00:15:46.809 { 00:15:46.809 "name": "BaseBdev4", 00:15:46.809 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:46.809 "is_configured": true, 00:15:46.809 "data_offset": 0, 00:15:46.809 "data_size": 65536 00:15:46.809 } 00:15:46.809 ] 00:15:46.809 }' 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.809 20:11:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.379 20:11:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.379 20:11:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.379 20:11:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.379 [2024-12-08 20:11:19.128122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.379 [2024-12-08 20:11:19.142340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:47.379 20:11:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.379 20:11:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:47.379 [2024-12-08 20:11:19.151594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.318 
20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.318 "name": "raid_bdev1", 00:15:48.318 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:48.318 "strip_size_kb": 64, 00:15:48.318 "state": "online", 00:15:48.318 "raid_level": "raid5f", 00:15:48.318 "superblock": false, 00:15:48.318 "num_base_bdevs": 4, 00:15:48.318 "num_base_bdevs_discovered": 4, 00:15:48.318 "num_base_bdevs_operational": 4, 00:15:48.318 "process": { 00:15:48.318 "type": "rebuild", 00:15:48.318 "target": "spare", 00:15:48.318 "progress": { 00:15:48.318 "blocks": 19200, 00:15:48.318 "percent": 9 00:15:48.318 } 00:15:48.318 }, 00:15:48.318 "base_bdevs_list": [ 00:15:48.318 { 00:15:48.318 "name": "spare", 00:15:48.318 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:48.318 "is_configured": true, 00:15:48.318 "data_offset": 0, 00:15:48.318 "data_size": 65536 00:15:48.318 }, 00:15:48.318 { 00:15:48.318 "name": "BaseBdev2", 00:15:48.318 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:48.318 "is_configured": true, 00:15:48.318 "data_offset": 0, 00:15:48.318 "data_size": 65536 00:15:48.318 }, 00:15:48.318 { 00:15:48.318 "name": "BaseBdev3", 00:15:48.318 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:48.318 "is_configured": true, 00:15:48.318 "data_offset": 0, 00:15:48.318 "data_size": 65536 00:15:48.318 }, 00:15:48.318 { 00:15:48.318 "name": "BaseBdev4", 00:15:48.318 "uuid": 
"bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:48.318 "is_configured": true, 00:15:48.318 "data_offset": 0, 00:15:48.318 "data_size": 65536 00:15:48.318 } 00:15:48.318 ] 00:15:48.318 }' 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.318 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 [2024-12-08 20:11:20.286483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.578 [2024-12-08 20:11:20.357575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.578 [2024-12-08 20:11:20.357651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.578 [2024-12-08 20:11:20.357668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.578 [2024-12-08 20:11:20.357677] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.578 "name": "raid_bdev1", 00:15:48.578 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:48.578 "strip_size_kb": 64, 00:15:48.578 "state": "online", 00:15:48.578 "raid_level": "raid5f", 00:15:48.578 "superblock": false, 00:15:48.578 "num_base_bdevs": 4, 00:15:48.578 "num_base_bdevs_discovered": 3, 00:15:48.578 "num_base_bdevs_operational": 3, 00:15:48.578 "base_bdevs_list": [ 00:15:48.578 { 00:15:48.578 "name": null, 00:15:48.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.578 "is_configured": false, 00:15:48.578 "data_offset": 0, 
00:15:48.578 "data_size": 65536 00:15:48.578 }, 00:15:48.578 { 00:15:48.578 "name": "BaseBdev2", 00:15:48.578 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:48.578 "is_configured": true, 00:15:48.578 "data_offset": 0, 00:15:48.578 "data_size": 65536 00:15:48.578 }, 00:15:48.578 { 00:15:48.578 "name": "BaseBdev3", 00:15:48.578 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:48.578 "is_configured": true, 00:15:48.578 "data_offset": 0, 00:15:48.578 "data_size": 65536 00:15:48.578 }, 00:15:48.578 { 00:15:48.578 "name": "BaseBdev4", 00:15:48.578 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:48.578 "is_configured": true, 00:15:48.578 "data_offset": 0, 00:15:48.578 "data_size": 65536 00:15:48.578 } 00:15:48.578 ] 00:15:48.578 }' 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.578 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.148 "name": "raid_bdev1", 00:15:49.148 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:49.148 "strip_size_kb": 64, 00:15:49.148 "state": "online", 00:15:49.148 "raid_level": "raid5f", 00:15:49.148 "superblock": false, 00:15:49.148 "num_base_bdevs": 4, 00:15:49.148 "num_base_bdevs_discovered": 3, 00:15:49.148 "num_base_bdevs_operational": 3, 00:15:49.148 "base_bdevs_list": [ 00:15:49.148 { 00:15:49.148 "name": null, 00:15:49.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.148 "is_configured": false, 00:15:49.148 "data_offset": 0, 00:15:49.148 "data_size": 65536 00:15:49.148 }, 00:15:49.148 { 00:15:49.148 "name": "BaseBdev2", 00:15:49.148 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:49.148 "is_configured": true, 00:15:49.148 "data_offset": 0, 00:15:49.148 "data_size": 65536 00:15:49.148 }, 00:15:49.148 { 00:15:49.148 "name": "BaseBdev3", 00:15:49.148 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:49.148 "is_configured": true, 00:15:49.148 "data_offset": 0, 00:15:49.148 "data_size": 65536 00:15:49.148 }, 00:15:49.148 { 00:15:49.148 "name": "BaseBdev4", 00:15:49.148 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:49.148 "is_configured": true, 00:15:49.148 "data_offset": 0, 00:15:49.148 "data_size": 65536 00:15:49.148 } 00:15:49.148 ] 00:15:49.148 }' 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.148 [2024-12-08 20:11:20.981959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.148 [2024-12-08 20:11:20.996442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.148 20:11:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:49.148 [2024-12-08 20:11:21.004991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.105 "name": "raid_bdev1", 00:15:50.105 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:50.105 "strip_size_kb": 64, 00:15:50.105 "state": "online", 00:15:50.105 "raid_level": "raid5f", 00:15:50.105 "superblock": false, 00:15:50.105 "num_base_bdevs": 4, 00:15:50.105 "num_base_bdevs_discovered": 4, 00:15:50.105 "num_base_bdevs_operational": 4, 00:15:50.105 "process": { 00:15:50.105 "type": "rebuild", 00:15:50.105 "target": "spare", 00:15:50.105 "progress": { 00:15:50.105 "blocks": 19200, 00:15:50.105 "percent": 9 00:15:50.105 } 00:15:50.105 }, 00:15:50.105 "base_bdevs_list": [ 00:15:50.105 { 00:15:50.105 "name": "spare", 00:15:50.105 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:50.105 "is_configured": true, 00:15:50.105 "data_offset": 0, 00:15:50.105 "data_size": 65536 00:15:50.105 }, 00:15:50.105 { 00:15:50.105 "name": "BaseBdev2", 00:15:50.105 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:50.105 "is_configured": true, 00:15:50.105 "data_offset": 0, 00:15:50.105 "data_size": 65536 00:15:50.105 }, 00:15:50.105 { 00:15:50.105 "name": "BaseBdev3", 00:15:50.105 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:50.105 "is_configured": true, 00:15:50.105 "data_offset": 0, 00:15:50.105 "data_size": 65536 00:15:50.105 }, 00:15:50.105 { 00:15:50.105 "name": "BaseBdev4", 00:15:50.105 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:50.105 "is_configured": true, 00:15:50.105 "data_offset": 0, 00:15:50.105 "data_size": 65536 00:15:50.105 } 00:15:50.105 ] 00:15:50.105 }' 00:15:50.105 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.365 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=604 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.366 "name": "raid_bdev1", 00:15:50.366 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:50.366 "strip_size_kb": 64, 00:15:50.366 "state": "online", 00:15:50.366 "raid_level": "raid5f", 00:15:50.366 "superblock": false, 
00:15:50.366 "num_base_bdevs": 4, 00:15:50.366 "num_base_bdevs_discovered": 4, 00:15:50.366 "num_base_bdevs_operational": 4, 00:15:50.366 "process": { 00:15:50.366 "type": "rebuild", 00:15:50.366 "target": "spare", 00:15:50.366 "progress": { 00:15:50.366 "blocks": 21120, 00:15:50.366 "percent": 10 00:15:50.366 } 00:15:50.366 }, 00:15:50.366 "base_bdevs_list": [ 00:15:50.366 { 00:15:50.366 "name": "spare", 00:15:50.366 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:50.366 "is_configured": true, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 65536 00:15:50.366 }, 00:15:50.366 { 00:15:50.366 "name": "BaseBdev2", 00:15:50.366 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:50.366 "is_configured": true, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 65536 00:15:50.366 }, 00:15:50.366 { 00:15:50.366 "name": "BaseBdev3", 00:15:50.366 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:50.366 "is_configured": true, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 65536 00:15:50.366 }, 00:15:50.366 { 00:15:50.366 "name": "BaseBdev4", 00:15:50.366 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:50.366 "is_configured": true, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 65536 00:15:50.366 } 00:15:50.366 ] 00:15:50.366 }' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.366 20:11:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.749 "name": "raid_bdev1", 00:15:51.749 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:51.749 "strip_size_kb": 64, 00:15:51.749 "state": "online", 00:15:51.749 "raid_level": "raid5f", 00:15:51.749 "superblock": false, 00:15:51.749 "num_base_bdevs": 4, 00:15:51.749 "num_base_bdevs_discovered": 4, 00:15:51.749 "num_base_bdevs_operational": 4, 00:15:51.749 "process": { 00:15:51.749 "type": "rebuild", 00:15:51.749 "target": "spare", 00:15:51.749 "progress": { 00:15:51.749 "blocks": 44160, 00:15:51.749 "percent": 22 00:15:51.749 } 00:15:51.749 }, 00:15:51.749 "base_bdevs_list": [ 00:15:51.749 { 00:15:51.749 "name": "spare", 00:15:51.749 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:51.749 "is_configured": true, 00:15:51.749 "data_offset": 0, 00:15:51.749 "data_size": 65536 00:15:51.749 }, 00:15:51.749 { 00:15:51.749 
"name": "BaseBdev2", 00:15:51.749 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:51.749 "is_configured": true, 00:15:51.749 "data_offset": 0, 00:15:51.749 "data_size": 65536 00:15:51.749 }, 00:15:51.749 { 00:15:51.749 "name": "BaseBdev3", 00:15:51.749 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:51.749 "is_configured": true, 00:15:51.749 "data_offset": 0, 00:15:51.749 "data_size": 65536 00:15:51.749 }, 00:15:51.749 { 00:15:51.749 "name": "BaseBdev4", 00:15:51.749 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:51.749 "is_configured": true, 00:15:51.749 "data_offset": 0, 00:15:51.749 "data_size": 65536 00:15:51.749 } 00:15:51.749 ] 00:15:51.749 }' 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.749 20:11:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.689 "name": "raid_bdev1", 00:15:52.689 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:52.689 "strip_size_kb": 64, 00:15:52.689 "state": "online", 00:15:52.689 "raid_level": "raid5f", 00:15:52.689 "superblock": false, 00:15:52.689 "num_base_bdevs": 4, 00:15:52.689 "num_base_bdevs_discovered": 4, 00:15:52.689 "num_base_bdevs_operational": 4, 00:15:52.689 "process": { 00:15:52.689 "type": "rebuild", 00:15:52.689 "target": "spare", 00:15:52.689 "progress": { 00:15:52.689 "blocks": 65280, 00:15:52.689 "percent": 33 00:15:52.689 } 00:15:52.689 }, 00:15:52.689 "base_bdevs_list": [ 00:15:52.689 { 00:15:52.689 "name": "spare", 00:15:52.689 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 0, 00:15:52.689 "data_size": 65536 00:15:52.689 }, 00:15:52.689 { 00:15:52.689 "name": "BaseBdev2", 00:15:52.689 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 0, 00:15:52.689 "data_size": 65536 00:15:52.689 }, 00:15:52.689 { 00:15:52.689 "name": "BaseBdev3", 00:15:52.689 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 0, 00:15:52.689 "data_size": 65536 00:15:52.689 }, 00:15:52.689 { 00:15:52.689 "name": "BaseBdev4", 00:15:52.689 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:52.689 "is_configured": true, 00:15:52.689 "data_offset": 0, 00:15:52.689 
"data_size": 65536 00:15:52.689 } 00:15:52.689 ] 00:15:52.689 }' 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.689 20:11:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.070 "name": "raid_bdev1", 00:15:54.070 "uuid": 
"408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:54.070 "strip_size_kb": 64, 00:15:54.070 "state": "online", 00:15:54.070 "raid_level": "raid5f", 00:15:54.070 "superblock": false, 00:15:54.070 "num_base_bdevs": 4, 00:15:54.070 "num_base_bdevs_discovered": 4, 00:15:54.070 "num_base_bdevs_operational": 4, 00:15:54.070 "process": { 00:15:54.070 "type": "rebuild", 00:15:54.070 "target": "spare", 00:15:54.070 "progress": { 00:15:54.070 "blocks": 88320, 00:15:54.070 "percent": 44 00:15:54.070 } 00:15:54.070 }, 00:15:54.070 "base_bdevs_list": [ 00:15:54.070 { 00:15:54.070 "name": "spare", 00:15:54.070 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 0, 00:15:54.070 "data_size": 65536 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": "BaseBdev2", 00:15:54.070 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 0, 00:15:54.070 "data_size": 65536 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": "BaseBdev3", 00:15:54.070 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 0, 00:15:54.070 "data_size": 65536 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": "BaseBdev4", 00:15:54.070 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 0, 00:15:54.070 "data_size": 65536 00:15:54.070 } 00:15:54.070 ] 00:15:54.070 }' 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.070 20:11:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.008 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.009 "name": "raid_bdev1", 00:15:55.009 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:55.009 "strip_size_kb": 64, 00:15:55.009 "state": "online", 00:15:55.009 "raid_level": "raid5f", 00:15:55.009 "superblock": false, 00:15:55.009 "num_base_bdevs": 4, 00:15:55.009 "num_base_bdevs_discovered": 4, 00:15:55.009 "num_base_bdevs_operational": 4, 00:15:55.009 "process": { 00:15:55.009 "type": "rebuild", 00:15:55.009 "target": "spare", 00:15:55.009 "progress": { 00:15:55.009 "blocks": 109440, 00:15:55.009 "percent": 55 00:15:55.009 } 00:15:55.009 }, 00:15:55.009 "base_bdevs_list": [ 00:15:55.009 { 00:15:55.009 "name": "spare", 00:15:55.009 "uuid": 
"ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:55.009 "is_configured": true, 00:15:55.009 "data_offset": 0, 00:15:55.009 "data_size": 65536 00:15:55.009 }, 00:15:55.009 { 00:15:55.009 "name": "BaseBdev2", 00:15:55.009 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:55.009 "is_configured": true, 00:15:55.009 "data_offset": 0, 00:15:55.009 "data_size": 65536 00:15:55.009 }, 00:15:55.009 { 00:15:55.009 "name": "BaseBdev3", 00:15:55.009 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:55.009 "is_configured": true, 00:15:55.009 "data_offset": 0, 00:15:55.009 "data_size": 65536 00:15:55.009 }, 00:15:55.009 { 00:15:55.009 "name": "BaseBdev4", 00:15:55.009 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:55.009 "is_configured": true, 00:15:55.009 "data_offset": 0, 00:15:55.009 "data_size": 65536 00:15:55.009 } 00:15:55.009 ] 00:15:55.009 }' 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.009 20:11:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.946 20:11:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.946 20:11:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.205 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.205 "name": "raid_bdev1", 00:15:56.205 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:56.205 "strip_size_kb": 64, 00:15:56.205 "state": "online", 00:15:56.205 "raid_level": "raid5f", 00:15:56.205 "superblock": false, 00:15:56.205 "num_base_bdevs": 4, 00:15:56.205 "num_base_bdevs_discovered": 4, 00:15:56.205 "num_base_bdevs_operational": 4, 00:15:56.205 "process": { 00:15:56.205 "type": "rebuild", 00:15:56.205 "target": "spare", 00:15:56.205 "progress": { 00:15:56.205 "blocks": 130560, 00:15:56.205 "percent": 66 00:15:56.205 } 00:15:56.205 }, 00:15:56.205 "base_bdevs_list": [ 00:15:56.205 { 00:15:56.205 "name": "spare", 00:15:56.205 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:56.205 "is_configured": true, 00:15:56.205 "data_offset": 0, 00:15:56.205 "data_size": 65536 00:15:56.205 }, 00:15:56.205 { 00:15:56.205 "name": "BaseBdev2", 00:15:56.205 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:56.205 "is_configured": true, 00:15:56.205 "data_offset": 0, 00:15:56.205 "data_size": 65536 00:15:56.205 }, 00:15:56.205 { 00:15:56.205 "name": "BaseBdev3", 00:15:56.205 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:56.205 "is_configured": true, 00:15:56.205 "data_offset": 0, 00:15:56.205 "data_size": 65536 00:15:56.205 }, 
00:15:56.205 { 00:15:56.205 "name": "BaseBdev4", 00:15:56.205 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:56.205 "is_configured": true, 00:15:56.205 "data_offset": 0, 00:15:56.205 "data_size": 65536 00:15:56.205 } 00:15:56.205 ] 00:15:56.205 }' 00:15:56.205 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.205 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.205 20:11:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.205 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.205 20:11:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.140 "name": "raid_bdev1", 00:15:57.140 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:57.140 "strip_size_kb": 64, 00:15:57.140 "state": "online", 00:15:57.140 "raid_level": "raid5f", 00:15:57.140 "superblock": false, 00:15:57.140 "num_base_bdevs": 4, 00:15:57.140 "num_base_bdevs_discovered": 4, 00:15:57.140 "num_base_bdevs_operational": 4, 00:15:57.140 "process": { 00:15:57.140 "type": "rebuild", 00:15:57.140 "target": "spare", 00:15:57.140 "progress": { 00:15:57.140 "blocks": 153600, 00:15:57.140 "percent": 78 00:15:57.140 } 00:15:57.140 }, 00:15:57.140 "base_bdevs_list": [ 00:15:57.140 { 00:15:57.140 "name": "spare", 00:15:57.140 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:57.140 "is_configured": true, 00:15:57.140 "data_offset": 0, 00:15:57.140 "data_size": 65536 00:15:57.140 }, 00:15:57.140 { 00:15:57.140 "name": "BaseBdev2", 00:15:57.140 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:57.140 "is_configured": true, 00:15:57.140 "data_offset": 0, 00:15:57.140 "data_size": 65536 00:15:57.140 }, 00:15:57.140 { 00:15:57.140 "name": "BaseBdev3", 00:15:57.140 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:57.140 "is_configured": true, 00:15:57.140 "data_offset": 0, 00:15:57.140 "data_size": 65536 00:15:57.140 }, 00:15:57.140 { 00:15:57.140 "name": "BaseBdev4", 00:15:57.140 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:57.140 "is_configured": true, 00:15:57.140 "data_offset": 0, 00:15:57.140 "data_size": 65536 00:15:57.140 } 00:15:57.140 ] 00:15:57.140 }' 00:15:57.140 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.399 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.399 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.399 20:11:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.399 20:11:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.336 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.336 "name": "raid_bdev1", 00:15:58.336 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:58.336 "strip_size_kb": 64, 00:15:58.336 "state": "online", 00:15:58.336 "raid_level": "raid5f", 00:15:58.336 "superblock": false, 00:15:58.336 "num_base_bdevs": 4, 00:15:58.336 "num_base_bdevs_discovered": 4, 00:15:58.336 "num_base_bdevs_operational": 4, 00:15:58.336 "process": { 00:15:58.336 "type": "rebuild", 00:15:58.336 "target": "spare", 00:15:58.336 "progress": { 00:15:58.336 "blocks": 174720, 
00:15:58.336 "percent": 88 00:15:58.336 } 00:15:58.336 }, 00:15:58.336 "base_bdevs_list": [ 00:15:58.336 { 00:15:58.336 "name": "spare", 00:15:58.336 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:58.336 "is_configured": true, 00:15:58.336 "data_offset": 0, 00:15:58.336 "data_size": 65536 00:15:58.336 }, 00:15:58.336 { 00:15:58.336 "name": "BaseBdev2", 00:15:58.337 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:58.337 "is_configured": true, 00:15:58.337 "data_offset": 0, 00:15:58.337 "data_size": 65536 00:15:58.337 }, 00:15:58.337 { 00:15:58.337 "name": "BaseBdev3", 00:15:58.337 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:58.337 "is_configured": true, 00:15:58.337 "data_offset": 0, 00:15:58.337 "data_size": 65536 00:15:58.337 }, 00:15:58.337 { 00:15:58.337 "name": "BaseBdev4", 00:15:58.337 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:58.337 "is_configured": true, 00:15:58.337 "data_offset": 0, 00:15:58.337 "data_size": 65536 00:15:58.337 } 00:15:58.337 ] 00:15:58.337 }' 00:15:58.337 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.337 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.337 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.594 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.594 20:11:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.532 [2024-12-08 20:11:31.351594] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:59.532 [2024-12-08 20:11:31.351731] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:59.532 [2024-12-08 20:11:31.351818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.532 "name": "raid_bdev1", 00:15:59.532 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:59.532 "strip_size_kb": 64, 00:15:59.532 "state": "online", 00:15:59.532 "raid_level": "raid5f", 00:15:59.532 "superblock": false, 00:15:59.532 "num_base_bdevs": 4, 00:15:59.532 "num_base_bdevs_discovered": 4, 00:15:59.532 "num_base_bdevs_operational": 4, 00:15:59.532 "base_bdevs_list": [ 00:15:59.532 { 00:15:59.532 "name": "spare", 00:15:59.532 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev2", 00:15:59.532 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:59.532 
"is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev3", 00:15:59.532 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev4", 00:15:59.532 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 } 00:15:59.532 ] 00:15:59.532 }' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.532 "name": "raid_bdev1", 00:15:59.532 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:59.532 "strip_size_kb": 64, 00:15:59.532 "state": "online", 00:15:59.532 "raid_level": "raid5f", 00:15:59.532 "superblock": false, 00:15:59.532 "num_base_bdevs": 4, 00:15:59.532 "num_base_bdevs_discovered": 4, 00:15:59.532 "num_base_bdevs_operational": 4, 00:15:59.532 "base_bdevs_list": [ 00:15:59.532 { 00:15:59.532 "name": "spare", 00:15:59.532 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev2", 00:15:59.532 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev3", 00:15:59.532 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 }, 00:15:59.532 { 00:15:59.532 "name": "BaseBdev4", 00:15:59.532 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:59.532 "is_configured": true, 00:15:59.532 "data_offset": 0, 00:15:59.532 "data_size": 65536 00:15:59.532 } 00:15:59.532 ] 00:15:59.532 }' 00:15:59.532 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.793 20:11:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.793 "name": "raid_bdev1", 00:15:59.793 "uuid": "408d78d2-3ffb-4b81-b250-a9c669f8a0ce", 00:15:59.793 "strip_size_kb": 64, 00:15:59.793 "state": "online", 00:15:59.793 
"raid_level": "raid5f", 00:15:59.793 "superblock": false, 00:15:59.793 "num_base_bdevs": 4, 00:15:59.793 "num_base_bdevs_discovered": 4, 00:15:59.793 "num_base_bdevs_operational": 4, 00:15:59.793 "base_bdevs_list": [ 00:15:59.793 { 00:15:59.793 "name": "spare", 00:15:59.793 "uuid": "ad65c89a-c200-510c-8c0e-a16c508a4752", 00:15:59.793 "is_configured": true, 00:15:59.793 "data_offset": 0, 00:15:59.793 "data_size": 65536 00:15:59.793 }, 00:15:59.793 { 00:15:59.793 "name": "BaseBdev2", 00:15:59.793 "uuid": "1be1a709-0f31-56c8-a41a-4a57245def81", 00:15:59.793 "is_configured": true, 00:15:59.793 "data_offset": 0, 00:15:59.793 "data_size": 65536 00:15:59.793 }, 00:15:59.793 { 00:15:59.793 "name": "BaseBdev3", 00:15:59.793 "uuid": "1057b030-2082-5274-8827-de7eaa629d00", 00:15:59.793 "is_configured": true, 00:15:59.793 "data_offset": 0, 00:15:59.793 "data_size": 65536 00:15:59.793 }, 00:15:59.793 { 00:15:59.793 "name": "BaseBdev4", 00:15:59.793 "uuid": "bd4f6f45-73ef-5c3d-a166-8be183aaafdc", 00:15:59.793 "is_configured": true, 00:15:59.793 "data_offset": 0, 00:15:59.793 "data_size": 65536 00:15:59.793 } 00:15:59.793 ] 00:15:59.793 }' 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.793 20:11:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.362 [2024-12-08 20:11:32.053729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.362 [2024-12-08 20:11:32.053810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.362 [2024-12-08 20:11:32.053913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:00.362 [2024-12-08 20:11:32.054071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.362 [2024-12-08 20:11:32.054122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@12 -- # local i 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:00.362 /dev/nbd0 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.362 1+0 records in 00:16:00.362 1+0 records out 00:16:00.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263965 s, 15.5 MB/s 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:16:00.362 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:00.623 /dev/nbd1 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:00.623 1+0 records in 00:16:00.623 1+0 records out 00:16:00.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275226 s, 14.9 MB/s 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.623 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.883 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.142 
20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.142 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.143 20:11:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84259 00:16:01.417 20:11:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84259 ']' 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84259 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84259 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.417 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84259' 00:16:01.417 killing process with pid 84259 00:16:01.417 Received shutdown signal, test time was about 60.000000 seconds 00:16:01.417 00:16:01.417 Latency(us) 00:16:01.417 [2024-12-08T20:11:33.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.418 [2024-12-08T20:11:33.396Z] =================================================================================================================== 00:16:01.418 [2024-12-08T20:11:33.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:01.418 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84259 00:16:01.418 [2024-12-08 20:11:33.194537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.418 20:11:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84259 00:16:01.678 [2024-12-08 20:11:33.651576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:03.062 00:16:03.062 real 0m18.662s 00:16:03.062 user 0m22.401s 00:16:03.062 sys 0m2.145s 
00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.062 ************************************ 00:16:03.062 END TEST raid5f_rebuild_test 00:16:03.062 ************************************ 00:16:03.062 20:11:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:03.062 20:11:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:03.062 20:11:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.062 20:11:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.062 ************************************ 00:16:03.062 START TEST raid5f_rebuild_test_sb 00:16:03.062 ************************************ 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.062 20:11:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true 
']' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84765 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84765 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84765 ']' 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.062 20:11:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.062 [2024-12-08 20:11:34.869263] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:03.062 [2024-12-08 20:11:34.869461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:03.062 Zero copy mechanism will not be used. 00:16:03.062 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84765 ] 00:16:03.321 [2024-12-08 20:11:35.039725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.321 [2024-12-08 20:11:35.141731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.581 [2024-12-08 20:11:35.322365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.581 [2024-12-08 20:11:35.322494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.841 BaseBdev1_malloc 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.841 [2024-12-08 20:11:35.727986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.841 [2024-12-08 
20:11:35.728079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.841 [2024-12-08 20:11:35.728120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.841 [2024-12-08 20:11:35.728131] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.841 [2024-12-08 20:11:35.730168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.841 [2024-12-08 20:11:35.730207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.841 BaseBdev1 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.841 BaseBdev2_malloc 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.841 [2024-12-08 20:11:35.781417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.841 [2024-12-08 20:11:35.781487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.841 [2024-12-08 20:11:35.781507] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.841 [2024-12-08 20:11:35.781517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.841 [2024-12-08 20:11:35.783493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.841 [2024-12-08 20:11:35.783595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.841 BaseBdev2 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.841 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 BaseBdev3_malloc 00:16:04.105 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 [2024-12-08 20:11:35.865770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:04.106 [2024-12-08 20:11:35.865820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.106 [2024-12-08 20:11:35.865855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:04.106 [2024-12-08 20:11:35.865866] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.106 
[2024-12-08 20:11:35.867836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.106 [2024-12-08 20:11:35.867873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:04.106 BaseBdev3 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 BaseBdev4_malloc 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 [2024-12-08 20:11:35.917302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:04.106 [2024-12-08 20:11:35.917367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.106 [2024-12-08 20:11:35.917386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:04.106 [2024-12-08 20:11:35.917396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.106 [2024-12-08 20:11:35.919445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.106 [2024-12-08 20:11:35.919518] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:04.106 BaseBdev4 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 spare_malloc 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 spare_delay 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 [2024-12-08 20:11:35.983466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.106 [2024-12-08 20:11:35.983513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.106 [2024-12-08 20:11:35.983545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:04.106 [2024-12-08 20:11:35.983555] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:04.106 [2024-12-08 20:11:35.985515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.106 [2024-12-08 20:11:35.985553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.106 spare 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 [2024-12-08 20:11:35.995517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.106 [2024-12-08 20:11:35.997227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.106 [2024-12-08 20:11:35.997365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.106 [2024-12-08 20:11:35.997429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.106 [2024-12-08 20:11:35.997625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:04.106 [2024-12-08 20:11:35.997640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.106 [2024-12-08 20:11:35.997878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:04.106 [2024-12-08 20:11:36.004972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:04.106 [2024-12-08 20:11:36.004992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:04.106 [2024-12-08 
20:11:36.005163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.106 
"name": "raid_bdev1", 00:16:04.106 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:04.106 "strip_size_kb": 64, 00:16:04.106 "state": "online", 00:16:04.106 "raid_level": "raid5f", 00:16:04.106 "superblock": true, 00:16:04.106 "num_base_bdevs": 4, 00:16:04.106 "num_base_bdevs_discovered": 4, 00:16:04.106 "num_base_bdevs_operational": 4, 00:16:04.106 "base_bdevs_list": [ 00:16:04.106 { 00:16:04.106 "name": "BaseBdev1", 00:16:04.106 "uuid": "5c6065d8-f5f9-55e7-803e-12a54943c1e1", 00:16:04.106 "is_configured": true, 00:16:04.106 "data_offset": 2048, 00:16:04.106 "data_size": 63488 00:16:04.106 }, 00:16:04.106 { 00:16:04.106 "name": "BaseBdev2", 00:16:04.106 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:04.106 "is_configured": true, 00:16:04.106 "data_offset": 2048, 00:16:04.106 "data_size": 63488 00:16:04.106 }, 00:16:04.106 { 00:16:04.106 "name": "BaseBdev3", 00:16:04.106 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:04.106 "is_configured": true, 00:16:04.106 "data_offset": 2048, 00:16:04.106 "data_size": 63488 00:16:04.106 }, 00:16:04.106 { 00:16:04.106 "name": "BaseBdev4", 00:16:04.106 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:04.106 "is_configured": true, 00:16:04.106 "data_offset": 2048, 00:16:04.106 "data_size": 63488 00:16:04.106 } 00:16:04.106 ] 00:16:04.106 }' 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.106 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.699 [2024-12-08 20:11:36.429023] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.699 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.700 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.700 20:11:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:04.700 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.700 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.700 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.700 [2024-12-08 20:11:36.660478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:04.972 /dev/nbd0 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.972 1+0 records in 00:16:04.972 1+0 records out 00:16:04.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173872 s, 23.6 MB/s 
00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:04.972 20:11:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:05.240 496+0 records in 00:16:05.240 496+0 records out 00:16:05.240 97517568 bytes (98 MB, 93 MiB) copied, 0.436355 s, 223 MB/s 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.240 20:11:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.240 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.500 [2024-12-08 20:11:37.354686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.500 [2024-12-08 20:11:37.389206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.500 "name": "raid_bdev1", 00:16:05.500 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:05.500 "strip_size_kb": 64, 00:16:05.500 "state": "online", 00:16:05.500 "raid_level": "raid5f", 00:16:05.500 "superblock": true, 00:16:05.500 "num_base_bdevs": 4, 00:16:05.500 "num_base_bdevs_discovered": 3, 00:16:05.500 "num_base_bdevs_operational": 3, 00:16:05.500 "base_bdevs_list": [ 00:16:05.500 { 00:16:05.500 "name": null, 00:16:05.500 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:05.500 "is_configured": false, 00:16:05.500 "data_offset": 0, 00:16:05.500 "data_size": 63488 00:16:05.500 }, 00:16:05.500 { 00:16:05.500 "name": "BaseBdev2", 00:16:05.500 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:05.500 "is_configured": true, 00:16:05.500 "data_offset": 2048, 00:16:05.500 "data_size": 63488 00:16:05.500 }, 00:16:05.500 { 00:16:05.500 "name": "BaseBdev3", 00:16:05.500 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:05.500 "is_configured": true, 00:16:05.500 "data_offset": 2048, 00:16:05.500 "data_size": 63488 00:16:05.500 }, 00:16:05.500 { 00:16:05.500 "name": "BaseBdev4", 00:16:05.500 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:05.500 "is_configured": true, 00:16:05.500 "data_offset": 2048, 00:16:05.500 "data_size": 63488 00:16:05.500 } 00:16:05.500 ] 00:16:05.500 }' 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.500 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.070 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.070 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.070 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.070 [2024-12-08 20:11:37.840441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.070 [2024-12-08 20:11:37.856133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:06.070 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.070 20:11:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:06.070 [2024-12-08 20:11:37.864968] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.010 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.010 "name": "raid_bdev1", 00:16:07.010 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:07.010 "strip_size_kb": 64, 00:16:07.010 "state": "online", 00:16:07.010 "raid_level": "raid5f", 00:16:07.010 "superblock": true, 00:16:07.010 "num_base_bdevs": 4, 00:16:07.010 "num_base_bdevs_discovered": 4, 00:16:07.010 "num_base_bdevs_operational": 4, 00:16:07.010 "process": { 00:16:07.010 "type": "rebuild", 00:16:07.010 "target": "spare", 00:16:07.011 "progress": { 00:16:07.011 "blocks": 19200, 00:16:07.011 "percent": 10 00:16:07.011 } 00:16:07.011 }, 00:16:07.011 "base_bdevs_list": [ 00:16:07.011 { 00:16:07.011 "name": "spare", 00:16:07.011 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:07.011 "is_configured": true, 00:16:07.011 
"data_offset": 2048, 00:16:07.011 "data_size": 63488 00:16:07.011 }, 00:16:07.011 { 00:16:07.011 "name": "BaseBdev2", 00:16:07.011 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:07.011 "is_configured": true, 00:16:07.011 "data_offset": 2048, 00:16:07.011 "data_size": 63488 00:16:07.011 }, 00:16:07.011 { 00:16:07.011 "name": "BaseBdev3", 00:16:07.011 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:07.011 "is_configured": true, 00:16:07.011 "data_offset": 2048, 00:16:07.011 "data_size": 63488 00:16:07.011 }, 00:16:07.011 { 00:16:07.011 "name": "BaseBdev4", 00:16:07.011 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:07.011 "is_configured": true, 00:16:07.011 "data_offset": 2048, 00:16:07.011 "data_size": 63488 00:16:07.011 } 00:16:07.011 ] 00:16:07.011 }' 00:16:07.011 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.011 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.011 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.270 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.270 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.270 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.270 20:11:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.270 [2024-12-08 20:11:38.999737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.270 [2024-12-08 20:11:39.070695] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.270 [2024-12-08 20:11:39.070771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.270 [2024-12-08 20:11:39.070787] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.270 [2024-12-08 20:11:39.070796] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.270 "name": "raid_bdev1", 00:16:07.270 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:07.270 "strip_size_kb": 64, 00:16:07.270 "state": "online", 00:16:07.270 "raid_level": "raid5f", 00:16:07.270 "superblock": true, 00:16:07.270 "num_base_bdevs": 4, 00:16:07.270 "num_base_bdevs_discovered": 3, 00:16:07.270 "num_base_bdevs_operational": 3, 00:16:07.270 "base_bdevs_list": [ 00:16:07.270 { 00:16:07.270 "name": null, 00:16:07.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.270 "is_configured": false, 00:16:07.270 "data_offset": 0, 00:16:07.270 "data_size": 63488 00:16:07.270 }, 00:16:07.270 { 00:16:07.270 "name": "BaseBdev2", 00:16:07.270 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:07.270 "is_configured": true, 00:16:07.270 "data_offset": 2048, 00:16:07.270 "data_size": 63488 00:16:07.270 }, 00:16:07.270 { 00:16:07.270 "name": "BaseBdev3", 00:16:07.270 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:07.270 "is_configured": true, 00:16:07.270 "data_offset": 2048, 00:16:07.270 "data_size": 63488 00:16:07.270 }, 00:16:07.270 { 00:16:07.270 "name": "BaseBdev4", 00:16:07.270 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:07.270 "is_configured": true, 00:16:07.270 "data_offset": 2048, 00:16:07.270 "data_size": 63488 00:16:07.270 } 00:16:07.270 ] 00:16:07.270 }' 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.270 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.843 "name": "raid_bdev1", 00:16:07.843 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:07.843 "strip_size_kb": 64, 00:16:07.843 "state": "online", 00:16:07.843 "raid_level": "raid5f", 00:16:07.843 "superblock": true, 00:16:07.843 "num_base_bdevs": 4, 00:16:07.843 "num_base_bdevs_discovered": 3, 00:16:07.843 "num_base_bdevs_operational": 3, 00:16:07.843 "base_bdevs_list": [ 00:16:07.843 { 00:16:07.843 "name": null, 00:16:07.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.843 "is_configured": false, 00:16:07.843 "data_offset": 0, 00:16:07.843 "data_size": 63488 00:16:07.843 }, 00:16:07.843 { 00:16:07.843 "name": "BaseBdev2", 00:16:07.843 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:07.843 "is_configured": true, 00:16:07.843 "data_offset": 2048, 00:16:07.843 "data_size": 63488 00:16:07.843 }, 00:16:07.843 { 00:16:07.843 "name": "BaseBdev3", 00:16:07.843 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:07.843 "is_configured": true, 00:16:07.843 "data_offset": 2048, 00:16:07.843 "data_size": 63488 00:16:07.843 }, 00:16:07.843 { 00:16:07.843 "name": "BaseBdev4", 00:16:07.843 
"uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:07.843 "is_configured": true, 00:16:07.843 "data_offset": 2048, 00:16:07.843 "data_size": 63488 00:16:07.843 } 00:16:07.843 ] 00:16:07.843 }' 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.843 [2024-12-08 20:11:39.651096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.843 [2024-12-08 20:11:39.665388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.843 20:11:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.843 [2024-12-08 20:11:39.673880] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.781 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.781 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.781 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.782 "name": "raid_bdev1", 00:16:08.782 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:08.782 "strip_size_kb": 64, 00:16:08.782 "state": "online", 00:16:08.782 "raid_level": "raid5f", 00:16:08.782 "superblock": true, 00:16:08.782 "num_base_bdevs": 4, 00:16:08.782 "num_base_bdevs_discovered": 4, 00:16:08.782 "num_base_bdevs_operational": 4, 00:16:08.782 "process": { 00:16:08.782 "type": "rebuild", 00:16:08.782 "target": "spare", 00:16:08.782 "progress": { 00:16:08.782 "blocks": 19200, 00:16:08.782 "percent": 10 00:16:08.782 } 00:16:08.782 }, 00:16:08.782 "base_bdevs_list": [ 00:16:08.782 { 00:16:08.782 "name": "spare", 00:16:08.782 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:08.782 "is_configured": true, 00:16:08.782 "data_offset": 2048, 00:16:08.782 "data_size": 63488 00:16:08.782 }, 00:16:08.782 { 00:16:08.782 "name": "BaseBdev2", 00:16:08.782 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:08.782 "is_configured": true, 00:16:08.782 "data_offset": 2048, 00:16:08.782 "data_size": 63488 00:16:08.782 }, 00:16:08.782 { 00:16:08.782 "name": "BaseBdev3", 00:16:08.782 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:08.782 
"is_configured": true, 00:16:08.782 "data_offset": 2048, 00:16:08.782 "data_size": 63488 00:16:08.782 }, 00:16:08.782 { 00:16:08.782 "name": "BaseBdev4", 00:16:08.782 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:08.782 "is_configured": true, 00:16:08.782 "data_offset": 2048, 00:16:08.782 "data_size": 63488 00:16:08.782 } 00:16:08.782 ] 00:16:08.782 }' 00:16:08.782 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:09.042 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=622 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.042 "name": "raid_bdev1", 00:16:09.042 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:09.042 "strip_size_kb": 64, 00:16:09.042 "state": "online", 00:16:09.042 "raid_level": "raid5f", 00:16:09.042 "superblock": true, 00:16:09.042 "num_base_bdevs": 4, 00:16:09.042 "num_base_bdevs_discovered": 4, 00:16:09.042 "num_base_bdevs_operational": 4, 00:16:09.042 "process": { 00:16:09.042 "type": "rebuild", 00:16:09.042 "target": "spare", 00:16:09.042 "progress": { 00:16:09.042 "blocks": 21120, 00:16:09.042 "percent": 11 00:16:09.042 } 00:16:09.042 }, 00:16:09.042 "base_bdevs_list": [ 00:16:09.042 { 00:16:09.042 "name": "spare", 00:16:09.042 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:09.042 "is_configured": true, 00:16:09.042 "data_offset": 2048, 00:16:09.042 "data_size": 63488 00:16:09.042 }, 00:16:09.042 { 00:16:09.042 "name": "BaseBdev2", 00:16:09.042 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:09.042 "is_configured": true, 00:16:09.042 "data_offset": 2048, 00:16:09.042 "data_size": 63488 00:16:09.042 }, 00:16:09.042 { 00:16:09.042 "name": "BaseBdev3", 00:16:09.042 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:09.042 
"is_configured": true, 00:16:09.042 "data_offset": 2048, 00:16:09.042 "data_size": 63488 00:16:09.042 }, 00:16:09.042 { 00:16:09.042 "name": "BaseBdev4", 00:16:09.042 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:09.042 "is_configured": true, 00:16:09.042 "data_offset": 2048, 00:16:09.042 "data_size": 63488 00:16:09.042 } 00:16:09.042 ] 00:16:09.042 }' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.042 20:11:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.979 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.979 20:11:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.238 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.238 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.238 "name": "raid_bdev1", 00:16:10.238 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:10.239 "strip_size_kb": 64, 00:16:10.239 "state": "online", 00:16:10.239 "raid_level": "raid5f", 00:16:10.239 "superblock": true, 00:16:10.239 "num_base_bdevs": 4, 00:16:10.239 "num_base_bdevs_discovered": 4, 00:16:10.239 "num_base_bdevs_operational": 4, 00:16:10.239 "process": { 00:16:10.239 "type": "rebuild", 00:16:10.239 "target": "spare", 00:16:10.239 "progress": { 00:16:10.239 "blocks": 42240, 00:16:10.239 "percent": 22 00:16:10.239 } 00:16:10.239 }, 00:16:10.239 "base_bdevs_list": [ 00:16:10.239 { 00:16:10.239 "name": "spare", 00:16:10.239 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:10.239 "is_configured": true, 00:16:10.239 "data_offset": 2048, 00:16:10.239 "data_size": 63488 00:16:10.239 }, 00:16:10.239 { 00:16:10.239 "name": "BaseBdev2", 00:16:10.239 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:10.239 "is_configured": true, 00:16:10.239 "data_offset": 2048, 00:16:10.239 "data_size": 63488 00:16:10.239 }, 00:16:10.239 { 00:16:10.239 "name": "BaseBdev3", 00:16:10.239 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:10.239 "is_configured": true, 00:16:10.239 "data_offset": 2048, 00:16:10.239 "data_size": 63488 00:16:10.239 }, 00:16:10.239 { 00:16:10.239 "name": "BaseBdev4", 00:16:10.239 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:10.239 "is_configured": true, 00:16:10.239 "data_offset": 2048, 00:16:10.239 "data_size": 63488 00:16:10.239 } 00:16:10.239 ] 00:16:10.239 }' 00:16:10.239 20:11:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.239 20:11:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.239 20:11:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.239 20:11:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.239 20:11:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.178 "name": "raid_bdev1", 00:16:11.178 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:11.178 "strip_size_kb": 64, 00:16:11.178 "state": "online", 00:16:11.178 "raid_level": "raid5f", 00:16:11.178 "superblock": true, 00:16:11.178 "num_base_bdevs": 4, 
00:16:11.178 "num_base_bdevs_discovered": 4, 00:16:11.178 "num_base_bdevs_operational": 4, 00:16:11.178 "process": { 00:16:11.178 "type": "rebuild", 00:16:11.178 "target": "spare", 00:16:11.178 "progress": { 00:16:11.178 "blocks": 65280, 00:16:11.178 "percent": 34 00:16:11.178 } 00:16:11.178 }, 00:16:11.178 "base_bdevs_list": [ 00:16:11.178 { 00:16:11.178 "name": "spare", 00:16:11.178 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:11.178 "is_configured": true, 00:16:11.178 "data_offset": 2048, 00:16:11.178 "data_size": 63488 00:16:11.178 }, 00:16:11.178 { 00:16:11.178 "name": "BaseBdev2", 00:16:11.178 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:11.178 "is_configured": true, 00:16:11.178 "data_offset": 2048, 00:16:11.178 "data_size": 63488 00:16:11.178 }, 00:16:11.178 { 00:16:11.178 "name": "BaseBdev3", 00:16:11.178 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:11.178 "is_configured": true, 00:16:11.178 "data_offset": 2048, 00:16:11.178 "data_size": 63488 00:16:11.178 }, 00:16:11.178 { 00:16:11.178 "name": "BaseBdev4", 00:16:11.178 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:11.178 "is_configured": true, 00:16:11.178 "data_offset": 2048, 00:16:11.178 "data_size": 63488 00:16:11.178 } 00:16:11.178 ] 00:16:11.178 }' 00:16:11.178 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.438 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.438 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.438 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.438 20:11:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.377 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.377 "name": "raid_bdev1", 00:16:12.377 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:12.377 "strip_size_kb": 64, 00:16:12.377 "state": "online", 00:16:12.377 "raid_level": "raid5f", 00:16:12.377 "superblock": true, 00:16:12.377 "num_base_bdevs": 4, 00:16:12.377 "num_base_bdevs_discovered": 4, 00:16:12.377 "num_base_bdevs_operational": 4, 00:16:12.377 "process": { 00:16:12.377 "type": "rebuild", 00:16:12.377 "target": "spare", 00:16:12.377 "progress": { 00:16:12.377 "blocks": 86400, 00:16:12.377 "percent": 45 00:16:12.377 } 00:16:12.377 }, 00:16:12.377 "base_bdevs_list": [ 00:16:12.377 { 00:16:12.377 "name": "spare", 00:16:12.377 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:12.377 "is_configured": true, 00:16:12.377 "data_offset": 2048, 00:16:12.377 "data_size": 63488 00:16:12.377 }, 
00:16:12.377 { 00:16:12.377 "name": "BaseBdev2", 00:16:12.377 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:12.377 "is_configured": true, 00:16:12.377 "data_offset": 2048, 00:16:12.377 "data_size": 63488 00:16:12.377 }, 00:16:12.377 { 00:16:12.377 "name": "BaseBdev3", 00:16:12.377 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:12.377 "is_configured": true, 00:16:12.377 "data_offset": 2048, 00:16:12.377 "data_size": 63488 00:16:12.377 }, 00:16:12.378 { 00:16:12.378 "name": "BaseBdev4", 00:16:12.378 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:12.378 "is_configured": true, 00:16:12.378 "data_offset": 2048, 00:16:12.378 "data_size": 63488 00:16:12.378 } 00:16:12.378 ] 00:16:12.378 }' 00:16:12.378 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.378 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.378 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.637 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.637 20:11:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.587 20:11:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.587 "name": "raid_bdev1", 00:16:13.587 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:13.587 "strip_size_kb": 64, 00:16:13.587 "state": "online", 00:16:13.587 "raid_level": "raid5f", 00:16:13.587 "superblock": true, 00:16:13.587 "num_base_bdevs": 4, 00:16:13.587 "num_base_bdevs_discovered": 4, 00:16:13.587 "num_base_bdevs_operational": 4, 00:16:13.587 "process": { 00:16:13.587 "type": "rebuild", 00:16:13.587 "target": "spare", 00:16:13.587 "progress": { 00:16:13.587 "blocks": 107520, 00:16:13.587 "percent": 56 00:16:13.587 } 00:16:13.587 }, 00:16:13.587 "base_bdevs_list": [ 00:16:13.587 { 00:16:13.587 "name": "spare", 00:16:13.587 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:13.587 "is_configured": true, 00:16:13.587 "data_offset": 2048, 00:16:13.587 "data_size": 63488 00:16:13.587 }, 00:16:13.587 { 00:16:13.587 "name": "BaseBdev2", 00:16:13.587 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:13.587 "is_configured": true, 00:16:13.587 "data_offset": 2048, 00:16:13.587 "data_size": 63488 00:16:13.587 }, 00:16:13.587 { 00:16:13.587 "name": "BaseBdev3", 00:16:13.587 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:13.587 "is_configured": true, 00:16:13.587 "data_offset": 2048, 00:16:13.587 "data_size": 63488 00:16:13.587 }, 00:16:13.587 { 00:16:13.587 "name": "BaseBdev4", 00:16:13.587 "uuid": 
"214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:13.587 "is_configured": true, 00:16:13.587 "data_offset": 2048, 00:16:13.587 "data_size": 63488 00:16:13.587 } 00:16:13.587 ] 00:16:13.587 }' 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.587 20:11:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.971 
20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.971 "name": "raid_bdev1", 00:16:14.971 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:14.971 "strip_size_kb": 64, 00:16:14.971 "state": "online", 00:16:14.971 "raid_level": "raid5f", 00:16:14.971 "superblock": true, 00:16:14.971 "num_base_bdevs": 4, 00:16:14.971 "num_base_bdevs_discovered": 4, 00:16:14.971 "num_base_bdevs_operational": 4, 00:16:14.971 "process": { 00:16:14.971 "type": "rebuild", 00:16:14.971 "target": "spare", 00:16:14.971 "progress": { 00:16:14.971 "blocks": 130560, 00:16:14.971 "percent": 68 00:16:14.971 } 00:16:14.971 }, 00:16:14.971 "base_bdevs_list": [ 00:16:14.971 { 00:16:14.971 "name": "spare", 00:16:14.971 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:14.971 "is_configured": true, 00:16:14.971 "data_offset": 2048, 00:16:14.971 "data_size": 63488 00:16:14.971 }, 00:16:14.971 { 00:16:14.971 "name": "BaseBdev2", 00:16:14.971 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:14.971 "is_configured": true, 00:16:14.971 "data_offset": 2048, 00:16:14.971 "data_size": 63488 00:16:14.971 }, 00:16:14.971 { 00:16:14.971 "name": "BaseBdev3", 00:16:14.971 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:14.971 "is_configured": true, 00:16:14.971 "data_offset": 2048, 00:16:14.971 "data_size": 63488 00:16:14.971 }, 00:16:14.971 { 00:16:14.971 "name": "BaseBdev4", 00:16:14.971 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:14.971 "is_configured": true, 00:16:14.971 "data_offset": 2048, 00:16:14.971 "data_size": 63488 00:16:14.971 } 00:16:14.971 ] 00:16:14.971 }' 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.971 20:11:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.971 20:11:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.909 "name": "raid_bdev1", 00:16:15.909 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:15.909 "strip_size_kb": 64, 00:16:15.909 "state": "online", 00:16:15.909 "raid_level": "raid5f", 00:16:15.909 "superblock": true, 00:16:15.909 "num_base_bdevs": 4, 00:16:15.909 "num_base_bdevs_discovered": 4, 00:16:15.909 "num_base_bdevs_operational": 4, 00:16:15.909 "process": { 00:16:15.909 "type": "rebuild", 00:16:15.909 "target": "spare", 00:16:15.909 "progress": 
{ 00:16:15.909 "blocks": 151680, 00:16:15.909 "percent": 79 00:16:15.909 } 00:16:15.909 }, 00:16:15.909 "base_bdevs_list": [ 00:16:15.909 { 00:16:15.909 "name": "spare", 00:16:15.909 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:15.909 "is_configured": true, 00:16:15.909 "data_offset": 2048, 00:16:15.909 "data_size": 63488 00:16:15.909 }, 00:16:15.909 { 00:16:15.909 "name": "BaseBdev2", 00:16:15.909 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:15.909 "is_configured": true, 00:16:15.909 "data_offset": 2048, 00:16:15.909 "data_size": 63488 00:16:15.909 }, 00:16:15.909 { 00:16:15.909 "name": "BaseBdev3", 00:16:15.909 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:15.909 "is_configured": true, 00:16:15.909 "data_offset": 2048, 00:16:15.909 "data_size": 63488 00:16:15.909 }, 00:16:15.909 { 00:16:15.909 "name": "BaseBdev4", 00:16:15.909 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:15.909 "is_configured": true, 00:16:15.909 "data_offset": 2048, 00:16:15.909 "data_size": 63488 00:16:15.909 } 00:16:15.909 ] 00:16:15.909 }' 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.909 20:11:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.293 
20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.293 "name": "raid_bdev1", 00:16:17.293 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:17.293 "strip_size_kb": 64, 00:16:17.293 "state": "online", 00:16:17.293 "raid_level": "raid5f", 00:16:17.293 "superblock": true, 00:16:17.293 "num_base_bdevs": 4, 00:16:17.293 "num_base_bdevs_discovered": 4, 00:16:17.293 "num_base_bdevs_operational": 4, 00:16:17.293 "process": { 00:16:17.293 "type": "rebuild", 00:16:17.293 "target": "spare", 00:16:17.293 "progress": { 00:16:17.293 "blocks": 174720, 00:16:17.293 "percent": 91 00:16:17.293 } 00:16:17.293 }, 00:16:17.293 "base_bdevs_list": [ 00:16:17.293 { 00:16:17.293 "name": "spare", 00:16:17.293 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:17.293 "is_configured": true, 00:16:17.293 "data_offset": 2048, 00:16:17.293 "data_size": 63488 00:16:17.293 }, 00:16:17.293 { 00:16:17.293 "name": "BaseBdev2", 00:16:17.293 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:17.293 "is_configured": true, 00:16:17.293 "data_offset": 2048, 00:16:17.293 "data_size": 
63488 00:16:17.293 }, 00:16:17.293 { 00:16:17.293 "name": "BaseBdev3", 00:16:17.293 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:17.293 "is_configured": true, 00:16:17.293 "data_offset": 2048, 00:16:17.293 "data_size": 63488 00:16:17.293 }, 00:16:17.293 { 00:16:17.293 "name": "BaseBdev4", 00:16:17.293 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:17.293 "is_configured": true, 00:16:17.293 "data_offset": 2048, 00:16:17.293 "data_size": 63488 00:16:17.293 } 00:16:17.293 ] 00:16:17.293 }' 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.293 20:11:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.863 [2024-12-08 20:11:49.719380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:17.864 [2024-12-08 20:11:49.719472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:17.864 [2024-12-08 20:11:49.719585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.123 20:11:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.123 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.123 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.123 "name": "raid_bdev1", 00:16:18.123 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:18.123 "strip_size_kb": 64, 00:16:18.123 "state": "online", 00:16:18.123 "raid_level": "raid5f", 00:16:18.123 "superblock": true, 00:16:18.123 "num_base_bdevs": 4, 00:16:18.123 "num_base_bdevs_discovered": 4, 00:16:18.123 "num_base_bdevs_operational": 4, 00:16:18.123 "base_bdevs_list": [ 00:16:18.123 { 00:16:18.123 "name": "spare", 00:16:18.123 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:18.123 "is_configured": true, 00:16:18.123 "data_offset": 2048, 00:16:18.123 "data_size": 63488 00:16:18.123 }, 00:16:18.123 { 00:16:18.123 "name": "BaseBdev2", 00:16:18.123 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:18.123 "is_configured": true, 00:16:18.123 "data_offset": 2048, 00:16:18.123 "data_size": 63488 00:16:18.123 }, 00:16:18.123 { 00:16:18.123 "name": "BaseBdev3", 00:16:18.123 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:18.123 "is_configured": true, 00:16:18.123 "data_offset": 2048, 00:16:18.123 "data_size": 63488 00:16:18.123 }, 00:16:18.123 { 00:16:18.123 "name": "BaseBdev4", 00:16:18.123 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:18.123 
"is_configured": true, 00:16:18.123 "data_offset": 2048, 00:16:18.123 "data_size": 63488 00:16:18.123 } 00:16:18.123 ] 00:16:18.123 }' 00:16:18.123 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.123 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:18.123 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.384 "name": "raid_bdev1", 00:16:18.384 "uuid": 
"1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:18.384 "strip_size_kb": 64, 00:16:18.384 "state": "online", 00:16:18.384 "raid_level": "raid5f", 00:16:18.384 "superblock": true, 00:16:18.384 "num_base_bdevs": 4, 00:16:18.384 "num_base_bdevs_discovered": 4, 00:16:18.384 "num_base_bdevs_operational": 4, 00:16:18.384 "base_bdevs_list": [ 00:16:18.384 { 00:16:18.384 "name": "spare", 00:16:18.384 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev2", 00:16:18.384 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev3", 00:16:18.384 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev4", 00:16:18.384 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 } 00:16:18.384 ] 00:16:18.384 }' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.384 
20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.384 "name": "raid_bdev1", 00:16:18.384 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:18.384 "strip_size_kb": 64, 00:16:18.384 "state": "online", 00:16:18.384 "raid_level": "raid5f", 00:16:18.384 "superblock": true, 00:16:18.384 "num_base_bdevs": 4, 00:16:18.384 "num_base_bdevs_discovered": 4, 00:16:18.384 "num_base_bdevs_operational": 4, 00:16:18.384 "base_bdevs_list": [ 00:16:18.384 { 00:16:18.384 "name": "spare", 00:16:18.384 "uuid": 
"e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev2", 00:16:18.384 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev3", 00:16:18.384 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 }, 00:16:18.384 { 00:16:18.384 "name": "BaseBdev4", 00:16:18.384 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:18.384 "is_configured": true, 00:16:18.384 "data_offset": 2048, 00:16:18.384 "data_size": 63488 00:16:18.384 } 00:16:18.384 ] 00:16:18.384 }' 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.384 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.955 [2024-12-08 20:11:50.699052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.955 [2024-12-08 20:11:50.699120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.955 [2024-12-08 20:11:50.699219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.955 [2024-12-08 20:11:50.699323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.955 [2024-12-08 20:11:50.699347] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.955 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:19.216 /dev/nbd0 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.216 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.216 1+0 records in 00:16:19.216 1+0 records out 00:16:19.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525828 s, 7.8 MB/s 00:16:19.217 20:11:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.217 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:19.477 /dev/nbd1 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.477 1+0 records in 00:16:19.477 1+0 records out 00:16:19.477 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000240751 s, 17.0 MB/s 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.477 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.738 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:19.998 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.998 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.999 
20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 [2024-12-08 20:11:51.846848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.999 [2024-12-08 20:11:51.846902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.999 [2024-12-08 20:11:51.846926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:19.999 [2024-12-08 20:11:51.846935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.999 [2024-12-08 20:11:51.849237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.999 [2024-12-08 20:11:51.849275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.999 [2024-12-08 20:11:51.849369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.999 [2024-12-08 20:11:51.849419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.999 [2024-12-08 20:11:51.849556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.999 [2024-12-08 20:11:51.849642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.999 [2024-12-08 20:11:51.849757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:16:19.999 spare 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.999 [2024-12-08 20:11:51.949680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:19.999 [2024-12-08 20:11:51.949709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.999 [2024-12-08 20:11:51.949995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:19.999 [2024-12-08 20:11:51.957082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:19.999 [2024-12-08 20:11:51.957101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:19.999 [2024-12-08 20:11:51.957283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.999 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.259 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.259 20:11:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.259 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.260 "name": "raid_bdev1", 00:16:20.260 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:20.260 "strip_size_kb": 64, 00:16:20.260 "state": "online", 00:16:20.260 "raid_level": "raid5f", 00:16:20.260 "superblock": true, 00:16:20.260 "num_base_bdevs": 4, 00:16:20.260 "num_base_bdevs_discovered": 4, 00:16:20.260 "num_base_bdevs_operational": 4, 00:16:20.260 "base_bdevs_list": [ 00:16:20.260 { 00:16:20.260 "name": "spare", 00:16:20.260 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:20.260 "is_configured": true, 00:16:20.260 "data_offset": 2048, 00:16:20.260 "data_size": 63488 00:16:20.260 }, 00:16:20.260 { 00:16:20.260 "name": "BaseBdev2", 00:16:20.260 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:20.260 "is_configured": true, 00:16:20.260 "data_offset": 2048, 00:16:20.260 "data_size": 63488 00:16:20.260 }, 00:16:20.260 { 00:16:20.260 "name": 
"BaseBdev3", 00:16:20.260 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:20.260 "is_configured": true, 00:16:20.260 "data_offset": 2048, 00:16:20.260 "data_size": 63488 00:16:20.260 }, 00:16:20.260 { 00:16:20.260 "name": "BaseBdev4", 00:16:20.260 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:20.260 "is_configured": true, 00:16:20.260 "data_offset": 2048, 00:16:20.260 "data_size": 63488 00:16:20.260 } 00:16:20.260 ] 00:16:20.260 }' 00:16:20.260 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.260 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.520 "name": "raid_bdev1", 00:16:20.520 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:20.520 
"strip_size_kb": 64, 00:16:20.520 "state": "online", 00:16:20.520 "raid_level": "raid5f", 00:16:20.520 "superblock": true, 00:16:20.520 "num_base_bdevs": 4, 00:16:20.520 "num_base_bdevs_discovered": 4, 00:16:20.520 "num_base_bdevs_operational": 4, 00:16:20.520 "base_bdevs_list": [ 00:16:20.520 { 00:16:20.520 "name": "spare", 00:16:20.520 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:20.520 "is_configured": true, 00:16:20.520 "data_offset": 2048, 00:16:20.520 "data_size": 63488 00:16:20.520 }, 00:16:20.520 { 00:16:20.520 "name": "BaseBdev2", 00:16:20.520 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:20.520 "is_configured": true, 00:16:20.520 "data_offset": 2048, 00:16:20.520 "data_size": 63488 00:16:20.520 }, 00:16:20.520 { 00:16:20.520 "name": "BaseBdev3", 00:16:20.520 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:20.520 "is_configured": true, 00:16:20.520 "data_offset": 2048, 00:16:20.520 "data_size": 63488 00:16:20.520 }, 00:16:20.520 { 00:16:20.520 "name": "BaseBdev4", 00:16:20.520 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:20.520 "is_configured": true, 00:16:20.520 "data_offset": 2048, 00:16:20.520 "data_size": 63488 00:16:20.520 } 00:16:20.520 ] 00:16:20.520 }' 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.520 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.780 [2024-12-08 20:11:52.552360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.780 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.781 "name": "raid_bdev1", 00:16:20.781 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:20.781 "strip_size_kb": 64, 00:16:20.781 "state": "online", 00:16:20.781 "raid_level": "raid5f", 00:16:20.781 "superblock": true, 00:16:20.781 "num_base_bdevs": 4, 00:16:20.781 "num_base_bdevs_discovered": 3, 00:16:20.781 "num_base_bdevs_operational": 3, 00:16:20.781 "base_bdevs_list": [ 00:16:20.781 { 00:16:20.781 "name": null, 00:16:20.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.781 "is_configured": false, 00:16:20.781 "data_offset": 0, 00:16:20.781 "data_size": 63488 00:16:20.781 }, 00:16:20.781 { 00:16:20.781 "name": "BaseBdev2", 00:16:20.781 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:20.781 "is_configured": true, 00:16:20.781 "data_offset": 2048, 00:16:20.781 "data_size": 63488 00:16:20.781 }, 00:16:20.781 { 00:16:20.781 "name": "BaseBdev3", 00:16:20.781 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:20.781 "is_configured": true, 00:16:20.781 "data_offset": 2048, 00:16:20.781 "data_size": 63488 00:16:20.781 }, 00:16:20.781 { 00:16:20.781 "name": "BaseBdev4", 00:16:20.781 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:20.781 "is_configured": true, 00:16:20.781 "data_offset": 2048, 00:16:20.781 "data_size": 63488 00:16:20.781 } 00:16:20.781 ] 00:16:20.781 
}' 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.781 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.041 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.041 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.041 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.041 [2024-12-08 20:11:52.967650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.041 [2024-12-08 20:11:52.967913] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.041 [2024-12-08 20:11:52.968026] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:21.041 [2024-12-08 20:11:52.968106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.041 [2024-12-08 20:11:52.983257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:21.041 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.041 20:11:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:21.041 [2024-12-08 20:11:52.992008] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.422 20:11:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.422 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.422 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.422 "name": "raid_bdev1", 00:16:22.423 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:22.423 "strip_size_kb": 64, 00:16:22.423 "state": "online", 00:16:22.423 "raid_level": "raid5f", 00:16:22.423 "superblock": true, 00:16:22.423 "num_base_bdevs": 4, 00:16:22.423 "num_base_bdevs_discovered": 4, 00:16:22.423 "num_base_bdevs_operational": 4, 00:16:22.423 "process": { 00:16:22.423 "type": "rebuild", 00:16:22.423 "target": "spare", 00:16:22.423 "progress": { 00:16:22.423 "blocks": 19200, 00:16:22.423 "percent": 10 00:16:22.423 } 00:16:22.423 }, 00:16:22.423 "base_bdevs_list": [ 00:16:22.423 { 00:16:22.423 "name": "spare", 00:16:22.423 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev2", 00:16:22.423 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev3", 00:16:22.423 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:22.423 
"is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev4", 00:16:22.423 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 } 00:16:22.423 ] 00:16:22.423 }' 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.423 [2024-12-08 20:11:54.151357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.423 [2024-12-08 20:11:54.197855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.423 [2024-12-08 20:11:54.197980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.423 [2024-12-08 20:11:54.198015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.423 [2024-12-08 20:11:54.198025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.423 "name": "raid_bdev1", 00:16:22.423 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:22.423 "strip_size_kb": 64, 00:16:22.423 "state": "online", 00:16:22.423 "raid_level": "raid5f", 00:16:22.423 "superblock": true, 00:16:22.423 "num_base_bdevs": 4, 00:16:22.423 "num_base_bdevs_discovered": 3, 
00:16:22.423 "num_base_bdevs_operational": 3, 00:16:22.423 "base_bdevs_list": [ 00:16:22.423 { 00:16:22.423 "name": null, 00:16:22.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.423 "is_configured": false, 00:16:22.423 "data_offset": 0, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev2", 00:16:22.423 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev3", 00:16:22.423 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 }, 00:16:22.423 { 00:16:22.423 "name": "BaseBdev4", 00:16:22.423 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:22.423 "is_configured": true, 00:16:22.423 "data_offset": 2048, 00:16:22.423 "data_size": 63488 00:16:22.423 } 00:16:22.423 ] 00:16:22.423 }' 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.423 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.683 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.683 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.683 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.683 [2024-12-08 20:11:54.594821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.683 [2024-12-08 20:11:54.594923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.683 [2024-12-08 20:11:54.594976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:22.683 [2024-12-08 20:11:54.595009] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.683 [2024-12-08 20:11:54.595649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.683 [2024-12-08 20:11:54.595780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.683 [2024-12-08 20:11:54.595937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:22.683 [2024-12-08 20:11:54.596020] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.683 [2024-12-08 20:11:54.596085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:22.683 [2024-12-08 20:11:54.596192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.683 [2024-12-08 20:11:54.611532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:22.683 spare 00:16:22.683 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.683 20:11:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:22.683 [2024-12-08 20:11:54.621406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.653 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.913 "name": "raid_bdev1", 00:16:23.913 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:23.913 "strip_size_kb": 64, 00:16:23.913 "state": "online", 00:16:23.913 "raid_level": "raid5f", 00:16:23.913 "superblock": true, 00:16:23.913 "num_base_bdevs": 4, 00:16:23.913 "num_base_bdevs_discovered": 4, 00:16:23.913 "num_base_bdevs_operational": 4, 00:16:23.913 "process": { 00:16:23.913 "type": "rebuild", 00:16:23.913 "target": "spare", 00:16:23.913 "progress": { 00:16:23.913 "blocks": 19200, 00:16:23.913 "percent": 10 00:16:23.913 } 00:16:23.913 }, 00:16:23.913 "base_bdevs_list": [ 00:16:23.913 { 00:16:23.913 "name": "spare", 00:16:23.913 "uuid": "e565cbc7-26f0-5749-8cb4-512b29c63953", 00:16:23.913 "is_configured": true, 00:16:23.913 "data_offset": 2048, 00:16:23.913 "data_size": 63488 00:16:23.913 }, 00:16:23.913 { 00:16:23.913 "name": "BaseBdev2", 00:16:23.913 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:23.913 "is_configured": true, 00:16:23.913 "data_offset": 2048, 00:16:23.913 "data_size": 63488 00:16:23.913 }, 00:16:23.913 { 00:16:23.913 "name": "BaseBdev3", 00:16:23.913 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:23.913 "is_configured": true, 00:16:23.913 "data_offset": 2048, 00:16:23.913 "data_size": 63488 00:16:23.913 }, 00:16:23.913 { 00:16:23.913 "name": "BaseBdev4", 00:16:23.913 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 
00:16:23.913 "is_configured": true, 00:16:23.913 "data_offset": 2048, 00:16:23.913 "data_size": 63488 00:16:23.913 } 00:16:23.913 ] 00:16:23.913 }' 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.913 [2024-12-08 20:11:55.760342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.913 [2024-12-08 20:11:55.827431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.913 [2024-12-08 20:11:55.827496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.913 [2024-12-08 20:11:55.827516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.913 [2024-12-08 20:11:55.827523] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.913 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.173 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.173 "name": "raid_bdev1", 00:16:24.173 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:24.173 "strip_size_kb": 64, 00:16:24.173 "state": "online", 00:16:24.173 "raid_level": "raid5f", 00:16:24.173 "superblock": true, 00:16:24.173 "num_base_bdevs": 4, 00:16:24.173 "num_base_bdevs_discovered": 3, 00:16:24.173 "num_base_bdevs_operational": 3, 00:16:24.173 "base_bdevs_list": [ 00:16:24.173 { 00:16:24.173 "name": null, 00:16:24.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.173 "is_configured": 
false, 00:16:24.173 "data_offset": 0, 00:16:24.173 "data_size": 63488 00:16:24.173 }, 00:16:24.173 { 00:16:24.173 "name": "BaseBdev2", 00:16:24.173 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:24.173 "is_configured": true, 00:16:24.173 "data_offset": 2048, 00:16:24.173 "data_size": 63488 00:16:24.173 }, 00:16:24.173 { 00:16:24.173 "name": "BaseBdev3", 00:16:24.173 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:24.173 "is_configured": true, 00:16:24.173 "data_offset": 2048, 00:16:24.173 "data_size": 63488 00:16:24.173 }, 00:16:24.173 { 00:16:24.173 "name": "BaseBdev4", 00:16:24.173 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:24.173 "is_configured": true, 00:16:24.173 "data_offset": 2048, 00:16:24.173 "data_size": 63488 00:16:24.173 } 00:16:24.173 ] 00:16:24.173 }' 00:16:24.173 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.173 20:11:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.433 "name": "raid_bdev1", 00:16:24.433 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:24.433 "strip_size_kb": 64, 00:16:24.433 "state": "online", 00:16:24.433 "raid_level": "raid5f", 00:16:24.433 "superblock": true, 00:16:24.433 "num_base_bdevs": 4, 00:16:24.433 "num_base_bdevs_discovered": 3, 00:16:24.433 "num_base_bdevs_operational": 3, 00:16:24.433 "base_bdevs_list": [ 00:16:24.433 { 00:16:24.433 "name": null, 00:16:24.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.433 "is_configured": false, 00:16:24.433 "data_offset": 0, 00:16:24.433 "data_size": 63488 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "name": "BaseBdev2", 00:16:24.433 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:24.433 "is_configured": true, 00:16:24.433 "data_offset": 2048, 00:16:24.433 "data_size": 63488 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "name": "BaseBdev3", 00:16:24.433 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:24.433 "is_configured": true, 00:16:24.433 "data_offset": 2048, 00:16:24.433 "data_size": 63488 00:16:24.433 }, 00:16:24.433 { 00:16:24.433 "name": "BaseBdev4", 00:16:24.433 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:24.433 "is_configured": true, 00:16:24.433 "data_offset": 2048, 00:16:24.433 "data_size": 63488 00:16:24.433 } 00:16:24.433 ] 00:16:24.433 }' 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.433 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.693 [2024-12-08 20:11:56.435972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.693 [2024-12-08 20:11:56.436021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.693 [2024-12-08 20:11:56.436042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:24.693 [2024-12-08 20:11:56.436052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.693 [2024-12-08 20:11:56.436511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.693 [2024-12-08 20:11:56.436530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.693 [2024-12-08 20:11:56.436609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:24.693 [2024-12-08 20:11:56.436624] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:24.693 [2024-12-08 20:11:56.436636] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:16:24.693 [2024-12-08 20:11:56.436646] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:24.693 BaseBdev1 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.693 20:11:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.632 "name": "raid_bdev1", 00:16:25.632 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:25.632 "strip_size_kb": 64, 00:16:25.632 "state": "online", 00:16:25.632 "raid_level": "raid5f", 00:16:25.632 "superblock": true, 00:16:25.632 "num_base_bdevs": 4, 00:16:25.632 "num_base_bdevs_discovered": 3, 00:16:25.632 "num_base_bdevs_operational": 3, 00:16:25.632 "base_bdevs_list": [ 00:16:25.632 { 00:16:25.632 "name": null, 00:16:25.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.632 "is_configured": false, 00:16:25.632 "data_offset": 0, 00:16:25.632 "data_size": 63488 00:16:25.632 }, 00:16:25.632 { 00:16:25.632 "name": "BaseBdev2", 00:16:25.632 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:25.632 "is_configured": true, 00:16:25.632 "data_offset": 2048, 00:16:25.632 "data_size": 63488 00:16:25.632 }, 00:16:25.632 { 00:16:25.632 "name": "BaseBdev3", 00:16:25.632 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:25.632 "is_configured": true, 00:16:25.632 "data_offset": 2048, 00:16:25.632 "data_size": 63488 00:16:25.632 }, 00:16:25.632 { 00:16:25.632 "name": "BaseBdev4", 00:16:25.632 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:25.632 "is_configured": true, 00:16:25.632 "data_offset": 2048, 00:16:25.632 "data_size": 63488 00:16:25.632 } 00:16:25.632 ] 00:16:25.632 }' 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.632 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.892 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.153 "name": "raid_bdev1", 00:16:26.153 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:26.153 "strip_size_kb": 64, 00:16:26.153 "state": "online", 00:16:26.153 "raid_level": "raid5f", 00:16:26.153 "superblock": true, 00:16:26.153 "num_base_bdevs": 4, 00:16:26.153 "num_base_bdevs_discovered": 3, 00:16:26.153 "num_base_bdevs_operational": 3, 00:16:26.153 "base_bdevs_list": [ 00:16:26.153 { 00:16:26.153 "name": null, 00:16:26.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.153 "is_configured": false, 00:16:26.153 "data_offset": 0, 00:16:26.153 "data_size": 63488 00:16:26.153 }, 00:16:26.153 { 00:16:26.153 "name": "BaseBdev2", 00:16:26.153 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:26.153 "is_configured": true, 00:16:26.153 "data_offset": 2048, 00:16:26.153 "data_size": 63488 00:16:26.153 }, 00:16:26.153 { 00:16:26.153 "name": "BaseBdev3", 00:16:26.153 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:26.153 "is_configured": true, 00:16:26.153 "data_offset": 2048, 00:16:26.153 "data_size": 63488 00:16:26.153 }, 
00:16:26.153 { 00:16:26.153 "name": "BaseBdev4", 00:16:26.153 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:26.153 "is_configured": true, 00:16:26.153 "data_offset": 2048, 00:16:26.153 "data_size": 63488 00:16:26.153 } 00:16:26.153 ] 00:16:26.153 }' 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.153 [2024-12-08 20:11:57.969626] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.153 [2024-12-08 20:11:57.969869] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.153 [2024-12-08 20:11:57.969937] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.153 request: 00:16:26.153 { 00:16:26.153 "base_bdev": "BaseBdev1", 00:16:26.153 "raid_bdev": "raid_bdev1", 00:16:26.153 "method": "bdev_raid_add_base_bdev", 00:16:26.153 "req_id": 1 00:16:26.153 } 00:16:26.153 Got JSON-RPC error response 00:16:26.153 response: 00:16:26.153 { 00:16:26.153 "code": -22, 00:16:26.153 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:26.153 } 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:26.153 20:11:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.093 20:11:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.093 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.093 "name": "raid_bdev1", 00:16:27.093 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:27.093 "strip_size_kb": 64, 00:16:27.093 "state": "online", 00:16:27.093 "raid_level": "raid5f", 00:16:27.093 "superblock": true, 00:16:27.093 "num_base_bdevs": 4, 00:16:27.093 "num_base_bdevs_discovered": 3, 00:16:27.093 "num_base_bdevs_operational": 3, 00:16:27.093 "base_bdevs_list": [ 00:16:27.093 { 00:16:27.093 "name": null, 00:16:27.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.093 "is_configured": false, 00:16:27.093 "data_offset": 0, 00:16:27.093 "data_size": 63488 00:16:27.093 }, 00:16:27.093 { 00:16:27.093 "name": "BaseBdev2", 00:16:27.093 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:27.093 "is_configured": true, 00:16:27.093 
"data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 }, 00:16:27.093 { 00:16:27.093 "name": "BaseBdev3", 00:16:27.093 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 }, 00:16:27.093 { 00:16:27.093 "name": "BaseBdev4", 00:16:27.093 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 } 00:16:27.093 ] 00:16:27.093 }' 00:16:27.093 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.093 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.663 
"name": "raid_bdev1", 00:16:27.663 "uuid": "1a995962-3e61-42bf-a8d6-5a6d479691d3", 00:16:27.663 "strip_size_kb": 64, 00:16:27.663 "state": "online", 00:16:27.663 "raid_level": "raid5f", 00:16:27.663 "superblock": true, 00:16:27.663 "num_base_bdevs": 4, 00:16:27.663 "num_base_bdevs_discovered": 3, 00:16:27.663 "num_base_bdevs_operational": 3, 00:16:27.663 "base_bdevs_list": [ 00:16:27.663 { 00:16:27.663 "name": null, 00:16:27.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.663 "is_configured": false, 00:16:27.663 "data_offset": 0, 00:16:27.663 "data_size": 63488 00:16:27.663 }, 00:16:27.663 { 00:16:27.663 "name": "BaseBdev2", 00:16:27.663 "uuid": "7b5b4a8d-2221-5172-9571-898c659f3824", 00:16:27.663 "is_configured": true, 00:16:27.663 "data_offset": 2048, 00:16:27.663 "data_size": 63488 00:16:27.663 }, 00:16:27.663 { 00:16:27.663 "name": "BaseBdev3", 00:16:27.663 "uuid": "2240c4a1-b724-5f9c-a577-511c5b404982", 00:16:27.663 "is_configured": true, 00:16:27.663 "data_offset": 2048, 00:16:27.663 "data_size": 63488 00:16:27.663 }, 00:16:27.663 { 00:16:27.663 "name": "BaseBdev4", 00:16:27.663 "uuid": "214c9c3a-b082-5d0e-aef0-a9deb3c3a7d8", 00:16:27.663 "is_configured": true, 00:16:27.663 "data_offset": 2048, 00:16:27.663 "data_size": 63488 00:16:27.663 } 00:16:27.663 ] 00:16:27.663 }' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84765 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84765 ']' 00:16:27.663 20:11:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84765 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84765 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84765' 00:16:27.663 killing process with pid 84765 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84765 00:16:27.663 Received shutdown signal, test time was about 60.000000 seconds 00:16:27.663 00:16:27.663 Latency(us) 00:16:27.663 [2024-12-08T20:11:59.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.663 [2024-12-08T20:11:59.641Z] =================================================================================================================== 00:16:27.663 [2024-12-08T20:11:59.641Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.663 [2024-12-08 20:11:59.614130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.663 [2024-12-08 20:11:59.614251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.663 20:11:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84765 00:16:27.663 [2024-12-08 20:11:59.614366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.663 [2024-12-08 20:11:59.614386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:28.233 [2024-12-08 20:12:00.067704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.176 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.176 00:16:29.176 real 0m26.338s 00:16:29.176 user 0m32.899s 00:16:29.176 sys 0m2.799s 00:16:29.176 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.176 20:12:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.176 ************************************ 00:16:29.176 END TEST raid5f_rebuild_test_sb 00:16:29.176 ************************************ 00:16:29.436 20:12:01 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:29.436 20:12:01 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:29.436 20:12:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:29.436 20:12:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.436 20:12:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.436 ************************************ 00:16:29.436 START TEST raid_state_function_test_sb_4k 00:16:29.436 ************************************ 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 
00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85570 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:29.436 Process raid pid: 85570 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85570' 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85570 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85570 ']' 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.436 20:12:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.436 [2024-12-08 20:12:01.289055] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:29.436 [2024-12-08 20:12:01.289241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.696 [2024-12-08 20:12:01.462539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.696 [2024-12-08 20:12:01.568458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.955 [2024-12-08 20:12:01.756377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.955 [2024-12-08 20:12:01.756414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.215 [2024-12-08 20:12:02.104309] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.215 [2024-12-08 20:12:02.104362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.215 [2024-12-08 20:12:02.104372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.215 [2024-12-08 20:12:02.104381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.215 "name": "Existed_Raid", 00:16:30.215 "uuid": 
"a6c26741-9752-4e6d-a05d-6d276c4c1313", 00:16:30.215 "strip_size_kb": 0, 00:16:30.215 "state": "configuring", 00:16:30.215 "raid_level": "raid1", 00:16:30.215 "superblock": true, 00:16:30.215 "num_base_bdevs": 2, 00:16:30.215 "num_base_bdevs_discovered": 0, 00:16:30.215 "num_base_bdevs_operational": 2, 00:16:30.215 "base_bdevs_list": [ 00:16:30.215 { 00:16:30.215 "name": "BaseBdev1", 00:16:30.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.215 "is_configured": false, 00:16:30.215 "data_offset": 0, 00:16:30.215 "data_size": 0 00:16:30.215 }, 00:16:30.215 { 00:16:30.215 "name": "BaseBdev2", 00:16:30.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.215 "is_configured": false, 00:16:30.215 "data_offset": 0, 00:16:30.215 "data_size": 0 00:16:30.215 } 00:16:30.215 ] 00:16:30.215 }' 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.215 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.785 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.785 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.785 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.785 [2024-12-08 20:12:02.535581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.785 [2024-12-08 20:12:02.535661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.785 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.785 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.786 20:12:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.786 [2024-12-08 20:12:02.547538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.786 [2024-12-08 20:12:02.547636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.786 [2024-12-08 20:12:02.547662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.786 [2024-12-08 20:12:02.547687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.786 [2024-12-08 20:12:02.594849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.786 BaseBdev1 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.786 [ 00:16:30.786 { 00:16:30.786 "name": "BaseBdev1", 00:16:30.786 "aliases": [ 00:16:30.786 "a1a88d51-7f23-4adb-8c18-1f63613c9fca" 00:16:30.786 ], 00:16:30.786 "product_name": "Malloc disk", 00:16:30.786 "block_size": 4096, 00:16:30.786 "num_blocks": 8192, 00:16:30.786 "uuid": "a1a88d51-7f23-4adb-8c18-1f63613c9fca", 00:16:30.786 "assigned_rate_limits": { 00:16:30.786 "rw_ios_per_sec": 0, 00:16:30.786 "rw_mbytes_per_sec": 0, 00:16:30.786 "r_mbytes_per_sec": 0, 00:16:30.786 "w_mbytes_per_sec": 0 00:16:30.786 }, 00:16:30.786 "claimed": true, 00:16:30.786 "claim_type": "exclusive_write", 00:16:30.786 "zoned": false, 00:16:30.786 "supported_io_types": { 00:16:30.786 "read": true, 00:16:30.786 "write": true, 00:16:30.786 "unmap": true, 00:16:30.786 "flush": true, 00:16:30.786 "reset": true, 00:16:30.786 "nvme_admin": false, 00:16:30.786 "nvme_io": false, 00:16:30.786 "nvme_io_md": false, 00:16:30.786 "write_zeroes": true, 00:16:30.786 "zcopy": true, 00:16:30.786 
"get_zone_info": false, 00:16:30.786 "zone_management": false, 00:16:30.786 "zone_append": false, 00:16:30.786 "compare": false, 00:16:30.786 "compare_and_write": false, 00:16:30.786 "abort": true, 00:16:30.786 "seek_hole": false, 00:16:30.786 "seek_data": false, 00:16:30.786 "copy": true, 00:16:30.786 "nvme_iov_md": false 00:16:30.786 }, 00:16:30.786 "memory_domains": [ 00:16:30.786 { 00:16:30.786 "dma_device_id": "system", 00:16:30.786 "dma_device_type": 1 00:16:30.786 }, 00:16:30.786 { 00:16:30.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.786 "dma_device_type": 2 00:16:30.786 } 00:16:30.786 ], 00:16:30.786 "driver_specific": {} 00:16:30.786 } 00:16:30.786 ] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.786 "name": "Existed_Raid", 00:16:30.786 "uuid": "68435fca-2199-4bd7-916b-ddb978a6ef49", 00:16:30.786 "strip_size_kb": 0, 00:16:30.786 "state": "configuring", 00:16:30.786 "raid_level": "raid1", 00:16:30.786 "superblock": true, 00:16:30.786 "num_base_bdevs": 2, 00:16:30.786 "num_base_bdevs_discovered": 1, 00:16:30.786 "num_base_bdevs_operational": 2, 00:16:30.786 "base_bdevs_list": [ 00:16:30.786 { 00:16:30.786 "name": "BaseBdev1", 00:16:30.786 "uuid": "a1a88d51-7f23-4adb-8c18-1f63613c9fca", 00:16:30.786 "is_configured": true, 00:16:30.786 "data_offset": 256, 00:16:30.786 "data_size": 7936 00:16:30.786 }, 00:16:30.786 { 00:16:30.786 "name": "BaseBdev2", 00:16:30.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.786 "is_configured": false, 00:16:30.786 "data_offset": 0, 00:16:30.786 "data_size": 0 00:16:30.786 } 00:16:30.786 ] 00:16:30.786 }' 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.786 20:12:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 [2024-12-08 20:12:03.042105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.357 [2024-12-08 20:12:03.042145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 [2024-12-08 20:12:03.054131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.357 [2024-12-08 20:12:03.055860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.357 [2024-12-08 20:12:03.055901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:31.357 20:12:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.357 "name": "Existed_Raid", 00:16:31.357 "uuid": "c7741706-051d-45d9-8386-ddc5db171ab9", 00:16:31.357 "strip_size_kb": 0, 00:16:31.357 "state": "configuring", 00:16:31.357 "raid_level": "raid1", 00:16:31.357 "superblock": true, 
00:16:31.357 "num_base_bdevs": 2, 00:16:31.357 "num_base_bdevs_discovered": 1, 00:16:31.357 "num_base_bdevs_operational": 2, 00:16:31.357 "base_bdevs_list": [ 00:16:31.357 { 00:16:31.357 "name": "BaseBdev1", 00:16:31.357 "uuid": "a1a88d51-7f23-4adb-8c18-1f63613c9fca", 00:16:31.357 "is_configured": true, 00:16:31.357 "data_offset": 256, 00:16:31.357 "data_size": 7936 00:16:31.357 }, 00:16:31.357 { 00:16:31.357 "name": "BaseBdev2", 00:16:31.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.357 "is_configured": false, 00:16:31.357 "data_offset": 0, 00:16:31.357 "data_size": 0 00:16:31.357 } 00:16:31.357 ] 00:16:31.357 }' 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.357 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.618 [2024-12-08 20:12:03.558355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.618 [2024-12-08 20:12:03.558715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.618 [2024-12-08 20:12:03.558766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.618 BaseBdev2 00:16:31.618 [2024-12-08 20:12:03.559095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:31.618 [2024-12-08 20:12:03.559291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.618 [2024-12-08 20:12:03.559349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:16:31.618 [2024-12-08 20:12:03.559619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.618 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.618 [ 00:16:31.618 { 00:16:31.618 "name": "BaseBdev2", 00:16:31.618 "aliases": [ 00:16:31.618 "ca4d933e-818a-4ecb-83e1-fffa2badadd7" 00:16:31.618 ], 00:16:31.618 "product_name": "Malloc 
disk", 00:16:31.618 "block_size": 4096, 00:16:31.618 "num_blocks": 8192, 00:16:31.618 "uuid": "ca4d933e-818a-4ecb-83e1-fffa2badadd7", 00:16:31.618 "assigned_rate_limits": { 00:16:31.618 "rw_ios_per_sec": 0, 00:16:31.618 "rw_mbytes_per_sec": 0, 00:16:31.618 "r_mbytes_per_sec": 0, 00:16:31.618 "w_mbytes_per_sec": 0 00:16:31.618 }, 00:16:31.618 "claimed": true, 00:16:31.618 "claim_type": "exclusive_write", 00:16:31.618 "zoned": false, 00:16:31.618 "supported_io_types": { 00:16:31.618 "read": true, 00:16:31.618 "write": true, 00:16:31.618 "unmap": true, 00:16:31.618 "flush": true, 00:16:31.618 "reset": true, 00:16:31.618 "nvme_admin": false, 00:16:31.618 "nvme_io": false, 00:16:31.618 "nvme_io_md": false, 00:16:31.618 "write_zeroes": true, 00:16:31.618 "zcopy": true, 00:16:31.618 "get_zone_info": false, 00:16:31.618 "zone_management": false, 00:16:31.618 "zone_append": false, 00:16:31.618 "compare": false, 00:16:31.618 "compare_and_write": false, 00:16:31.618 "abort": true, 00:16:31.618 "seek_hole": false, 00:16:31.618 "seek_data": false, 00:16:31.618 "copy": true, 00:16:31.618 "nvme_iov_md": false 00:16:31.618 }, 00:16:31.618 "memory_domains": [ 00:16:31.618 { 00:16:31.618 "dma_device_id": "system", 00:16:31.618 "dma_device_type": 1 00:16:31.618 }, 00:16:31.878 { 00:16:31.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.878 "dma_device_type": 2 00:16:31.878 } 00:16:31.878 ], 00:16:31.878 "driver_specific": {} 00:16:31.878 } 00:16:31.878 ] 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.878 "name": "Existed_Raid", 00:16:31.878 "uuid": "c7741706-051d-45d9-8386-ddc5db171ab9", 00:16:31.878 "strip_size_kb": 0, 00:16:31.878 "state": "online", 
00:16:31.878 "raid_level": "raid1", 00:16:31.878 "superblock": true, 00:16:31.878 "num_base_bdevs": 2, 00:16:31.878 "num_base_bdevs_discovered": 2, 00:16:31.878 "num_base_bdevs_operational": 2, 00:16:31.878 "base_bdevs_list": [ 00:16:31.878 { 00:16:31.878 "name": "BaseBdev1", 00:16:31.878 "uuid": "a1a88d51-7f23-4adb-8c18-1f63613c9fca", 00:16:31.878 "is_configured": true, 00:16:31.878 "data_offset": 256, 00:16:31.878 "data_size": 7936 00:16:31.878 }, 00:16:31.878 { 00:16:31.878 "name": "BaseBdev2", 00:16:31.878 "uuid": "ca4d933e-818a-4ecb-83e1-fffa2badadd7", 00:16:31.878 "is_configured": true, 00:16:31.878 "data_offset": 256, 00:16:31.878 "data_size": 7936 00:16:31.878 } 00:16:31.878 ] 00:16:31.878 }' 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.878 20:12:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.146 [2024-12-08 20:12:04.069764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.146 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.146 "name": "Existed_Raid", 00:16:32.146 "aliases": [ 00:16:32.146 "c7741706-051d-45d9-8386-ddc5db171ab9" 00:16:32.146 ], 00:16:32.146 "product_name": "Raid Volume", 00:16:32.146 "block_size": 4096, 00:16:32.146 "num_blocks": 7936, 00:16:32.146 "uuid": "c7741706-051d-45d9-8386-ddc5db171ab9", 00:16:32.146 "assigned_rate_limits": { 00:16:32.146 "rw_ios_per_sec": 0, 00:16:32.146 "rw_mbytes_per_sec": 0, 00:16:32.146 "r_mbytes_per_sec": 0, 00:16:32.146 "w_mbytes_per_sec": 0 00:16:32.146 }, 00:16:32.146 "claimed": false, 00:16:32.146 "zoned": false, 00:16:32.146 "supported_io_types": { 00:16:32.146 "read": true, 00:16:32.146 "write": true, 00:16:32.146 "unmap": false, 00:16:32.146 "flush": false, 00:16:32.146 "reset": true, 00:16:32.146 "nvme_admin": false, 00:16:32.146 "nvme_io": false, 00:16:32.147 "nvme_io_md": false, 00:16:32.147 "write_zeroes": true, 00:16:32.147 "zcopy": false, 00:16:32.147 "get_zone_info": false, 00:16:32.147 "zone_management": false, 00:16:32.147 "zone_append": false, 00:16:32.147 "compare": false, 00:16:32.147 "compare_and_write": false, 00:16:32.147 "abort": false, 00:16:32.147 "seek_hole": false, 00:16:32.147 "seek_data": false, 00:16:32.147 "copy": false, 00:16:32.147 "nvme_iov_md": false 00:16:32.147 }, 00:16:32.147 "memory_domains": [ 00:16:32.147 { 00:16:32.147 "dma_device_id": "system", 00:16:32.147 "dma_device_type": 1 00:16:32.147 }, 00:16:32.147 { 00:16:32.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.147 "dma_device_type": 2 00:16:32.147 }, 00:16:32.147 { 00:16:32.147 
"dma_device_id": "system", 00:16:32.147 "dma_device_type": 1 00:16:32.147 }, 00:16:32.147 { 00:16:32.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.147 "dma_device_type": 2 00:16:32.147 } 00:16:32.147 ], 00:16:32.147 "driver_specific": { 00:16:32.147 "raid": { 00:16:32.147 "uuid": "c7741706-051d-45d9-8386-ddc5db171ab9", 00:16:32.147 "strip_size_kb": 0, 00:16:32.147 "state": "online", 00:16:32.147 "raid_level": "raid1", 00:16:32.147 "superblock": true, 00:16:32.147 "num_base_bdevs": 2, 00:16:32.147 "num_base_bdevs_discovered": 2, 00:16:32.147 "num_base_bdevs_operational": 2, 00:16:32.147 "base_bdevs_list": [ 00:16:32.147 { 00:16:32.147 "name": "BaseBdev1", 00:16:32.147 "uuid": "a1a88d51-7f23-4adb-8c18-1f63613c9fca", 00:16:32.147 "is_configured": true, 00:16:32.147 "data_offset": 256, 00:16:32.147 "data_size": 7936 00:16:32.147 }, 00:16:32.147 { 00:16:32.147 "name": "BaseBdev2", 00:16:32.147 "uuid": "ca4d933e-818a-4ecb-83e1-fffa2badadd7", 00:16:32.147 "is_configured": true, 00:16:32.147 "data_offset": 256, 00:16:32.147 "data_size": 7936 00:16:32.147 } 00:16:32.147 ] 00:16:32.147 } 00:16:32.147 } 00:16:32.147 }' 00:16:32.147 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.435 BaseBdev2' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.435 
20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 [2024-12-08 20:12:04.269194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.435 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.436 20:12:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.436 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.718 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.718 "name": "Existed_Raid", 00:16:32.718 "uuid": "c7741706-051d-45d9-8386-ddc5db171ab9", 00:16:32.718 "strip_size_kb": 0, 00:16:32.718 "state": "online", 00:16:32.718 "raid_level": "raid1", 00:16:32.718 "superblock": true, 00:16:32.718 "num_base_bdevs": 2, 00:16:32.718 "num_base_bdevs_discovered": 1, 00:16:32.718 "num_base_bdevs_operational": 1, 00:16:32.718 "base_bdevs_list": [ 00:16:32.718 { 00:16:32.718 "name": null, 00:16:32.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.718 "is_configured": false, 00:16:32.718 "data_offset": 0, 00:16:32.718 "data_size": 7936 00:16:32.718 }, 00:16:32.718 { 00:16:32.718 "name": "BaseBdev2", 00:16:32.718 "uuid": "ca4d933e-818a-4ecb-83e1-fffa2badadd7", 00:16:32.718 "is_configured": true, 00:16:32.718 "data_offset": 256, 00:16:32.718 "data_size": 7936 00:16:32.718 } 00:16:32.718 ] 00:16:32.718 }' 00:16:32.718 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.718 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.978 20:12:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 [2024-12-08 20:12:04.806037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.978 [2024-12-08 20:12:04.806177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.978 [2024-12-08 20:12:04.895594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.978 [2024-12-08 20:12:04.895714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.978 [2024-12-08 20:12:04.895756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:32.978 20:12:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85570 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85570 ']' 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85570 00:16:32.978 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85570 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.237 killing process with pid 85570 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85570' 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85570 00:16:33.237 [2024-12-08 20:12:04.990973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.237 20:12:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85570 00:16:33.237 [2024-12-08 20:12:05.006732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.174 20:12:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.174 00:16:34.174 real 0m4.870s 00:16:34.174 user 0m7.013s 00:16:34.174 sys 0m0.797s 00:16:34.174 ************************************ 00:16:34.174 END TEST raid_state_function_test_sb_4k 00:16:34.174 ************************************ 00:16:34.174 20:12:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.174 20:12:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.174 20:12:06 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:34.174 20:12:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:34.174 20:12:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.174 20:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.174 ************************************ 00:16:34.174 START TEST raid_superblock_test_4k 00:16:34.174 ************************************ 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:34.174 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85816 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 85816 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85816 ']' 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.175 20:12:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.434 [2024-12-08 20:12:06.222518] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:34.434 [2024-12-08 20:12:06.222720] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85816 ] 00:16:34.434 [2024-12-08 20:12:06.396375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.693 [2024-12-08 20:12:06.499254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.951 [2024-12-08 20:12:06.681720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.951 [2024-12-08 20:12:06.681849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:35.211 20:12:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 malloc1 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 [2024-12-08 20:12:07.093794] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.211 [2024-12-08 20:12:07.093886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.211 
[2024-12-08 20:12:07.093924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.211 [2024-12-08 20:12:07.093964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.211 [2024-12-08 20:12:07.096137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.211 [2024-12-08 20:12:07.096206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.211 pt1 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 malloc2 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.211 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.211 [2024-12-08 20:12:07.151069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.211 [2024-12-08 20:12:07.151118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.211 [2024-12-08 20:12:07.151142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.212 [2024-12-08 20:12:07.151151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.212 [2024-12-08 20:12:07.153238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.212 [2024-12-08 20:12:07.153269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.212 pt2 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.212 [2024-12-08 20:12:07.163092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.212 [2024-12-08 20:12:07.164894] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.212 [2024-12-08 20:12:07.165071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.212 [2024-12-08 20:12:07.165089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.212 [2024-12-08 20:12:07.165339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:35.212 [2024-12-08 20:12:07.165509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.212 [2024-12-08 20:12:07.165531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.212 [2024-12-08 20:12:07.165694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.212 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.470 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.470 "name": "raid_bdev1", 00:16:35.470 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:35.470 "strip_size_kb": 0, 00:16:35.470 "state": "online", 00:16:35.470 "raid_level": "raid1", 00:16:35.470 "superblock": true, 00:16:35.470 "num_base_bdevs": 2, 00:16:35.470 "num_base_bdevs_discovered": 2, 00:16:35.470 "num_base_bdevs_operational": 2, 00:16:35.470 "base_bdevs_list": [ 00:16:35.470 { 00:16:35.470 "name": "pt1", 00:16:35.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.470 "is_configured": true, 00:16:35.470 "data_offset": 256, 00:16:35.470 "data_size": 7936 00:16:35.470 }, 00:16:35.470 { 00:16:35.470 "name": "pt2", 00:16:35.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.470 "is_configured": true, 00:16:35.470 "data_offset": 256, 00:16:35.470 "data_size": 7936 00:16:35.470 } 00:16:35.470 ] 00:16:35.470 }' 00:16:35.470 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.470 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.729 20:12:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.729 [2024-12-08 20:12:07.598566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.729 "name": "raid_bdev1", 00:16:35.729 "aliases": [ 00:16:35.729 "46c2611b-f43a-4f44-b2a2-448d19612830" 00:16:35.729 ], 00:16:35.729 "product_name": "Raid Volume", 00:16:35.729 "block_size": 4096, 00:16:35.729 "num_blocks": 7936, 00:16:35.729 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:35.729 "assigned_rate_limits": { 00:16:35.729 "rw_ios_per_sec": 0, 00:16:35.729 "rw_mbytes_per_sec": 0, 00:16:35.729 "r_mbytes_per_sec": 0, 00:16:35.729 "w_mbytes_per_sec": 0 00:16:35.729 }, 00:16:35.729 "claimed": false, 00:16:35.729 "zoned": false, 00:16:35.729 "supported_io_types": { 00:16:35.729 "read": true, 00:16:35.729 "write": true, 00:16:35.729 "unmap": false, 00:16:35.729 "flush": false, 
00:16:35.729 "reset": true, 00:16:35.729 "nvme_admin": false, 00:16:35.729 "nvme_io": false, 00:16:35.729 "nvme_io_md": false, 00:16:35.729 "write_zeroes": true, 00:16:35.729 "zcopy": false, 00:16:35.729 "get_zone_info": false, 00:16:35.729 "zone_management": false, 00:16:35.729 "zone_append": false, 00:16:35.729 "compare": false, 00:16:35.729 "compare_and_write": false, 00:16:35.729 "abort": false, 00:16:35.729 "seek_hole": false, 00:16:35.729 "seek_data": false, 00:16:35.729 "copy": false, 00:16:35.729 "nvme_iov_md": false 00:16:35.729 }, 00:16:35.729 "memory_domains": [ 00:16:35.729 { 00:16:35.729 "dma_device_id": "system", 00:16:35.729 "dma_device_type": 1 00:16:35.729 }, 00:16:35.729 { 00:16:35.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.729 "dma_device_type": 2 00:16:35.729 }, 00:16:35.729 { 00:16:35.729 "dma_device_id": "system", 00:16:35.729 "dma_device_type": 1 00:16:35.729 }, 00:16:35.729 { 00:16:35.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.729 "dma_device_type": 2 00:16:35.729 } 00:16:35.729 ], 00:16:35.729 "driver_specific": { 00:16:35.729 "raid": { 00:16:35.729 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:35.729 "strip_size_kb": 0, 00:16:35.729 "state": "online", 00:16:35.729 "raid_level": "raid1", 00:16:35.729 "superblock": true, 00:16:35.729 "num_base_bdevs": 2, 00:16:35.729 "num_base_bdevs_discovered": 2, 00:16:35.729 "num_base_bdevs_operational": 2, 00:16:35.729 "base_bdevs_list": [ 00:16:35.729 { 00:16:35.729 "name": "pt1", 00:16:35.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.729 "is_configured": true, 00:16:35.729 "data_offset": 256, 00:16:35.729 "data_size": 7936 00:16:35.729 }, 00:16:35.729 { 00:16:35.729 "name": "pt2", 00:16:35.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.729 "is_configured": true, 00:16:35.729 "data_offset": 256, 00:16:35.729 "data_size": 7936 00:16:35.729 } 00:16:35.729 ] 00:16:35.729 } 00:16:35.729 } 00:16:35.729 }' 00:16:35.729 20:12:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:35.729 pt2' 00:16:35.729 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.988 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:35.988 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.988 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:35.989 [2024-12-08 20:12:07.810178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46c2611b-f43a-4f44-b2a2-448d19612830 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 46c2611b-f43a-4f44-b2a2-448d19612830 ']' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 [2024-12-08 20:12:07.853834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.989 [2024-12-08 20:12:07.853855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.989 [2024-12-08 20:12:07.853925] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.989 [2024-12-08 20:12:07.853989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.989 [2024-12-08 20:12:07.854001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:35.989 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.249 [2024-12-08 20:12:07.981652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:36.249 [2024-12-08 20:12:07.983489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:36.249 [2024-12-08 20:12:07.983554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:36.249 [2024-12-08 20:12:07.983621] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:36.249 [2024-12-08 20:12:07.983637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.249 [2024-12-08 20:12:07.983658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:36.249 request: 00:16:36.249 { 00:16:36.249 "name": "raid_bdev1", 00:16:36.249 "raid_level": "raid1", 00:16:36.249 "base_bdevs": [ 00:16:36.249 "malloc1", 00:16:36.249 "malloc2" 00:16:36.249 ], 00:16:36.249 "superblock": false, 00:16:36.249 "method": "bdev_raid_create", 00:16:36.249 "req_id": 1 00:16:36.249 } 00:16:36.249 Got JSON-RPC error response 00:16:36.249 response: 00:16:36.249 { 00:16:36.249 "code": -17, 00:16:36.249 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:36.249 } 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.249 20:12:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.249 [2024-12-08 20:12:08.045527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:36.249 [2024-12-08 20:12:08.045610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.249 [2024-12-08 20:12:08.045644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:36.249 [2024-12-08 20:12:08.045674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.249 [2024-12-08 20:12:08.047848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.249 [2024-12-08 20:12:08.047926] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:36.249 [2024-12-08 20:12:08.048059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:36.249 [2024-12-08 20:12:08.048169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:36.249 pt1 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.249 "name": "raid_bdev1", 00:16:36.249 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:36.249 "strip_size_kb": 0, 00:16:36.249 "state": "configuring", 00:16:36.249 "raid_level": "raid1", 00:16:36.249 "superblock": true, 00:16:36.249 "num_base_bdevs": 2, 00:16:36.249 "num_base_bdevs_discovered": 1, 00:16:36.249 "num_base_bdevs_operational": 2, 00:16:36.249 "base_bdevs_list": [ 00:16:36.249 { 00:16:36.249 "name": "pt1", 00:16:36.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.249 "is_configured": true, 00:16:36.249 "data_offset": 256, 00:16:36.249 "data_size": 7936 00:16:36.249 }, 00:16:36.249 { 00:16:36.249 "name": null, 00:16:36.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.249 "is_configured": false, 00:16:36.249 "data_offset": 256, 00:16:36.249 "data_size": 7936 00:16:36.249 } 00:16:36.249 ] 00:16:36.249 }' 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.249 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.508 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:16:36.508 [2024-12-08 20:12:08.424920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.508 [2024-12-08 20:12:08.425035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.508 [2024-12-08 20:12:08.425075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:36.508 [2024-12-08 20:12:08.425105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.508 [2024-12-08 20:12:08.425587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.508 [2024-12-08 20:12:08.425646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.509 [2024-12-08 20:12:08.425768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:36.509 [2024-12-08 20:12:08.425822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.509 [2024-12-08 20:12:08.426017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:36.509 [2024-12-08 20:12:08.426060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:36.509 [2024-12-08 20:12:08.426343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:36.509 [2024-12-08 20:12:08.426542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.509 [2024-12-08 20:12:08.426581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:36.509 [2024-12-08 20:12:08.426789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.509 pt2 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:36.509 20:12:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.509 "name": "raid_bdev1", 00:16:36.509 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:36.509 
"strip_size_kb": 0, 00:16:36.509 "state": "online", 00:16:36.509 "raid_level": "raid1", 00:16:36.509 "superblock": true, 00:16:36.509 "num_base_bdevs": 2, 00:16:36.509 "num_base_bdevs_discovered": 2, 00:16:36.509 "num_base_bdevs_operational": 2, 00:16:36.509 "base_bdevs_list": [ 00:16:36.509 { 00:16:36.509 "name": "pt1", 00:16:36.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.509 "is_configured": true, 00:16:36.509 "data_offset": 256, 00:16:36.509 "data_size": 7936 00:16:36.509 }, 00:16:36.509 { 00:16:36.509 "name": "pt2", 00:16:36.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.509 "is_configured": true, 00:16:36.509 "data_offset": 256, 00:16:36.509 "data_size": 7936 00:16:36.509 } 00:16:36.509 ] 00:16:36.509 }' 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.509 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 20:12:08 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 [2024-12-08 20:12:08.840410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.077 "name": "raid_bdev1", 00:16:37.077 "aliases": [ 00:16:37.077 "46c2611b-f43a-4f44-b2a2-448d19612830" 00:16:37.077 ], 00:16:37.077 "product_name": "Raid Volume", 00:16:37.077 "block_size": 4096, 00:16:37.077 "num_blocks": 7936, 00:16:37.077 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:37.077 "assigned_rate_limits": { 00:16:37.077 "rw_ios_per_sec": 0, 00:16:37.077 "rw_mbytes_per_sec": 0, 00:16:37.077 "r_mbytes_per_sec": 0, 00:16:37.077 "w_mbytes_per_sec": 0 00:16:37.077 }, 00:16:37.077 "claimed": false, 00:16:37.077 "zoned": false, 00:16:37.077 "supported_io_types": { 00:16:37.077 "read": true, 00:16:37.077 "write": true, 00:16:37.077 "unmap": false, 00:16:37.077 "flush": false, 00:16:37.077 "reset": true, 00:16:37.077 "nvme_admin": false, 00:16:37.077 "nvme_io": false, 00:16:37.077 "nvme_io_md": false, 00:16:37.077 "write_zeroes": true, 00:16:37.077 "zcopy": false, 00:16:37.077 "get_zone_info": false, 00:16:37.077 "zone_management": false, 00:16:37.077 "zone_append": false, 00:16:37.077 "compare": false, 00:16:37.077 "compare_and_write": false, 00:16:37.077 "abort": false, 00:16:37.077 "seek_hole": false, 00:16:37.077 "seek_data": false, 00:16:37.077 "copy": false, 00:16:37.077 "nvme_iov_md": false 00:16:37.077 }, 00:16:37.077 "memory_domains": [ 00:16:37.077 { 00:16:37.077 "dma_device_id": "system", 00:16:37.077 "dma_device_type": 1 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.077 "dma_device_type": 2 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "dma_device_id": "system", 00:16:37.077 
"dma_device_type": 1 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.077 "dma_device_type": 2 00:16:37.077 } 00:16:37.077 ], 00:16:37.077 "driver_specific": { 00:16:37.077 "raid": { 00:16:37.077 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:37.077 "strip_size_kb": 0, 00:16:37.077 "state": "online", 00:16:37.077 "raid_level": "raid1", 00:16:37.077 "superblock": true, 00:16:37.077 "num_base_bdevs": 2, 00:16:37.077 "num_base_bdevs_discovered": 2, 00:16:37.077 "num_base_bdevs_operational": 2, 00:16:37.077 "base_bdevs_list": [ 00:16:37.077 { 00:16:37.077 "name": "pt1", 00:16:37.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.077 "is_configured": true, 00:16:37.077 "data_offset": 256, 00:16:37.077 "data_size": 7936 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "name": "pt2", 00:16:37.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.077 "is_configured": true, 00:16:37.077 "data_offset": 256, 00:16:37.077 "data_size": 7936 00:16:37.077 } 00:16:37.077 ] 00:16:37.077 } 00:16:37.077 } 00:16:37.077 }' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:37.077 pt2' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 20:12:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 [2024-12-08 
20:12:09.032059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.077 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 46c2611b-f43a-4f44-b2a2-448d19612830 '!=' 46c2611b-f43a-4f44-b2a2-448d19612830 ']' 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.337 [2024-12-08 20:12:09.079790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.337 "name": "raid_bdev1", 00:16:37.337 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:37.337 "strip_size_kb": 0, 00:16:37.337 "state": "online", 00:16:37.337 "raid_level": "raid1", 00:16:37.337 "superblock": true, 00:16:37.337 "num_base_bdevs": 2, 00:16:37.337 "num_base_bdevs_discovered": 1, 00:16:37.337 "num_base_bdevs_operational": 1, 00:16:37.337 "base_bdevs_list": [ 00:16:37.337 { 00:16:37.337 "name": null, 00:16:37.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.337 "is_configured": false, 00:16:37.337 "data_offset": 0, 00:16:37.337 "data_size": 7936 00:16:37.337 }, 00:16:37.337 { 00:16:37.337 "name": "pt2", 00:16:37.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.337 "is_configured": true, 00:16:37.337 "data_offset": 256, 00:16:37.337 "data_size": 7936 00:16:37.337 } 00:16:37.337 ] 00:16:37.337 }' 00:16:37.337 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.337 20:12:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 [2024-12-08 20:12:09.475133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.597 [2024-12-08 20:12:09.475199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.597 [2024-12-08 20:12:09.475321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.597 [2024-12-08 20:12:09.475437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.597 [2024-12-08 20:12:09.475503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 [2024-12-08 20:12:09.539015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.597 [2024-12-08 20:12:09.539062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.597 [2024-12-08 20:12:09.539078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:37.597 [2024-12-08 20:12:09.539088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.597 [2024-12-08 20:12:09.541258] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.597 [2024-12-08 20:12:09.541334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.597 [2024-12-08 20:12:09.541426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.597 [2024-12-08 20:12:09.541480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.597 [2024-12-08 20:12:09.541580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:37.597 [2024-12-08 20:12:09.541591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:37.597 [2024-12-08 20:12:09.541809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:37.597 [2024-12-08 20:12:09.541980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:37.597 [2024-12-08 20:12:09.541990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:37.597 [2024-12-08 20:12:09.542116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.597 pt2 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.597 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.857 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.857 "name": "raid_bdev1", 00:16:37.857 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:37.857 "strip_size_kb": 0, 00:16:37.857 "state": "online", 00:16:37.857 "raid_level": "raid1", 00:16:37.857 "superblock": true, 00:16:37.857 "num_base_bdevs": 2, 00:16:37.857 "num_base_bdevs_discovered": 1, 00:16:37.857 "num_base_bdevs_operational": 1, 00:16:37.857 "base_bdevs_list": [ 00:16:37.857 { 00:16:37.857 "name": null, 00:16:37.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.857 "is_configured": false, 00:16:37.857 "data_offset": 256, 00:16:37.857 "data_size": 7936 00:16:37.857 }, 00:16:37.857 { 00:16:37.857 "name": "pt2", 00:16:37.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.857 "is_configured": true, 00:16:37.857 "data_offset": 256, 00:16:37.857 "data_size": 7936 00:16:37.857 } 00:16:37.857 ] 00:16:37.857 }' 
00:16:37.857 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.857 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.118 [2024-12-08 20:12:09.958250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.118 [2024-12-08 20:12:09.958316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.118 [2024-12-08 20:12:09.958392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.118 [2024-12-08 20:12:09.958451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.118 [2024-12-08 20:12:09.958516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:38.118 20:12:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.118 [2024-12-08 20:12:10.018165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.118 [2024-12-08 20:12:10.018247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.118 [2024-12-08 20:12:10.018280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:38.118 [2024-12-08 20:12:10.018306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.118 [2024-12-08 20:12:10.020419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.118 [2024-12-08 20:12:10.020501] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.118 [2024-12-08 20:12:10.020594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.118 [2024-12-08 20:12:10.020666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.118 [2024-12-08 20:12:10.020878] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:38.118 [2024-12-08 20:12:10.020932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.118 [2024-12-08 20:12:10.021007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:38.118 [2024-12-08 20:12:10.021125] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.118 [2024-12-08 20:12:10.021234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:38.118 [2024-12-08 20:12:10.021270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:38.118 [2024-12-08 20:12:10.021539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:38.118 [2024-12-08 20:12:10.021735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:38.118 [2024-12-08 20:12:10.021785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:38.118 [2024-12-08 20:12:10.022003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.118 pt1 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.118 "name": "raid_bdev1", 00:16:38.118 "uuid": "46c2611b-f43a-4f44-b2a2-448d19612830", 00:16:38.118 "strip_size_kb": 0, 00:16:38.118 "state": "online", 00:16:38.118 "raid_level": "raid1", 00:16:38.118 "superblock": true, 00:16:38.118 "num_base_bdevs": 2, 00:16:38.118 "num_base_bdevs_discovered": 1, 00:16:38.118 "num_base_bdevs_operational": 1, 00:16:38.118 "base_bdevs_list": [ 00:16:38.118 { 00:16:38.118 "name": null, 00:16:38.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.118 "is_configured": false, 00:16:38.118 "data_offset": 256, 00:16:38.118 "data_size": 7936 00:16:38.118 }, 00:16:38.118 { 00:16:38.118 "name": "pt2", 00:16:38.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.118 "is_configured": true, 00:16:38.118 "data_offset": 256, 00:16:38.118 "data_size": 7936 00:16:38.118 } 00:16:38.118 ] 00:16:38.118 }' 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.118 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 [2024-12-08 20:12:10.481600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 46c2611b-f43a-4f44-b2a2-448d19612830 '!=' 46c2611b-f43a-4f44-b2a2-448d19612830 ']' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85816 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85816 ']' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85816 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85816 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85816' 00:16:38.689 killing process with pid 85816 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85816 00:16:38.689 [2024-12-08 20:12:10.545422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.689 [2024-12-08 20:12:10.545505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.689 [2024-12-08 20:12:10.545550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.689 [2024-12-08 20:12:10.545563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:38.689 20:12:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85816 00:16:38.949 [2024-12-08 20:12:10.739485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.887 20:12:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:39.887 00:16:39.887 real 0m5.655s 00:16:39.887 user 0m8.532s 00:16:39.887 sys 0m0.955s 00:16:39.887 20:12:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.887 20:12:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.887 ************************************ 00:16:39.887 END TEST raid_superblock_test_4k 00:16:39.887 ************************************ 00:16:39.887 20:12:11 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:16:39.887 20:12:11 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:39.887 20:12:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:39.887 20:12:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.887 20:12:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.888 ************************************ 00:16:39.888 START TEST raid_rebuild_test_sb_4k 00:16:39.888 ************************************ 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:39.888 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86139 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86139 00:16:40.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86139 ']' 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.149 20:12:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.149 [2024-12-08 20:12:11.956394] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:40.149 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:40.149 Zero copy mechanism will not be used. 00:16:40.149 [2024-12-08 20:12:11.956602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86139 ] 00:16:40.410 [2024-12-08 20:12:12.125309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.410 [2024-12-08 20:12:12.229420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.671 [2024-12-08 20:12:12.422469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.671 [2024-12-08 20:12:12.422524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:40.933 20:12:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.933 BaseBdev1_malloc 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.933 [2024-12-08 20:12:12.817566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.933 [2024-12-08 20:12:12.817625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.933 [2024-12-08 20:12:12.817664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:40.933 [2024-12-08 20:12:12.817675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.933 [2024-12-08 20:12:12.819769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.933 [2024-12-08 20:12:12.819808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.933 BaseBdev1 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.933 BaseBdev2_malloc 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.933 [2024-12-08 20:12:12.868059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:40.933 [2024-12-08 20:12:12.868129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.933 [2024-12-08 20:12:12.868153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:40.933 [2024-12-08 20:12:12.868164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.933 [2024-12-08 20:12:12.870150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.933 [2024-12-08 20:12:12.870187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:40.933 BaseBdev2 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.933 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:41.194 spare_malloc 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.194 spare_delay 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.194 [2024-12-08 20:12:12.964408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.194 [2024-12-08 20:12:12.964514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.194 [2024-12-08 20:12:12.964538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:41.194 [2024-12-08 20:12:12.964549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.194 [2024-12-08 20:12:12.966626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.194 [2024-12-08 20:12:12.966666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.194 spare 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.194 [2024-12-08 20:12:12.976427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.194 [2024-12-08 20:12:12.978148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.194 [2024-12-08 20:12:12.978323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:41.194 [2024-12-08 20:12:12.978338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.194 [2024-12-08 20:12:12.978559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:41.194 [2024-12-08 20:12:12.978729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:41.194 [2024-12-08 20:12:12.978738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:41.194 [2024-12-08 20:12:12.978876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.194 20:12:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.194 20:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.194 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.194 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.194 "name": "raid_bdev1", 00:16:41.194 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:41.194 "strip_size_kb": 0, 00:16:41.194 "state": "online", 00:16:41.194 "raid_level": "raid1", 00:16:41.194 "superblock": true, 00:16:41.195 "num_base_bdevs": 2, 00:16:41.195 "num_base_bdevs_discovered": 2, 00:16:41.195 "num_base_bdevs_operational": 2, 00:16:41.195 "base_bdevs_list": [ 00:16:41.195 { 00:16:41.195 "name": "BaseBdev1", 00:16:41.195 "uuid": "055cf6c2-1859-5e86-b193-dd54ac69383f", 00:16:41.195 "is_configured": true, 00:16:41.195 "data_offset": 256, 00:16:41.195 "data_size": 7936 00:16:41.195 }, 00:16:41.195 { 00:16:41.195 "name": "BaseBdev2", 00:16:41.195 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:41.195 "is_configured": true, 00:16:41.195 "data_offset": 256, 
00:16:41.195 "data_size": 7936 00:16:41.195 } 00:16:41.195 ] 00:16:41.195 }' 00:16:41.195 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.195 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.455 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:41.455 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.455 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.455 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.455 [2024-12-08 20:12:13.419895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.716 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:41.976 [2024-12-08 20:12:13.695290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:41.976 /dev/nbd0 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.976 1+0 records in 00:16:41.976 1+0 records out 00:16:41.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460102 s, 8.9 MB/s 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:41.976 20:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:42.547 7936+0 records in 00:16:42.547 7936+0 records 
out 00:16:42.547 32505856 bytes (33 MB, 31 MiB) copied, 0.570385 s, 57.0 MB/s 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.547 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.808 [2024-12-08 20:12:14.538005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:42.808 
20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.808 [2024-12-08 20:12:14.554090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.808 "name": "raid_bdev1", 00:16:42.808 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:42.808 "strip_size_kb": 0, 00:16:42.808 "state": "online", 00:16:42.808 "raid_level": "raid1", 00:16:42.808 "superblock": true, 00:16:42.808 "num_base_bdevs": 2, 00:16:42.808 "num_base_bdevs_discovered": 1, 00:16:42.808 "num_base_bdevs_operational": 1, 00:16:42.808 "base_bdevs_list": [ 00:16:42.808 { 00:16:42.808 "name": null, 00:16:42.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.808 "is_configured": false, 00:16:42.808 "data_offset": 0, 00:16:42.808 "data_size": 7936 00:16:42.808 }, 00:16:42.808 { 00:16:42.808 "name": "BaseBdev2", 00:16:42.808 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:42.808 "is_configured": true, 00:16:42.808 "data_offset": 256, 00:16:42.808 "data_size": 7936 00:16:42.808 } 00:16:42.808 ] 00:16:42.808 }' 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.808 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.069 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.069 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.069 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.069 [2024-12-08 20:12:14.981348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.069 [2024-12-08 20:12:14.997845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:16:43.069 20:12:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.069 20:12:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:43.069 [2024-12-08 20:12:14.999669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.454 "name": "raid_bdev1", 00:16:44.454 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:44.454 "strip_size_kb": 0, 00:16:44.454 "state": "online", 00:16:44.454 "raid_level": "raid1", 00:16:44.454 "superblock": true, 00:16:44.454 "num_base_bdevs": 2, 00:16:44.454 "num_base_bdevs_discovered": 2, 00:16:44.454 "num_base_bdevs_operational": 2, 00:16:44.454 "process": { 00:16:44.454 "type": "rebuild", 00:16:44.454 "target": "spare", 00:16:44.454 "progress": { 00:16:44.454 "blocks": 2560, 00:16:44.454 "percent": 32 00:16:44.454 
} 00:16:44.454 }, 00:16:44.454 "base_bdevs_list": [ 00:16:44.454 { 00:16:44.454 "name": "spare", 00:16:44.454 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:44.454 "is_configured": true, 00:16:44.454 "data_offset": 256, 00:16:44.454 "data_size": 7936 00:16:44.454 }, 00:16:44.454 { 00:16:44.454 "name": "BaseBdev2", 00:16:44.454 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:44.454 "is_configured": true, 00:16:44.454 "data_offset": 256, 00:16:44.454 "data_size": 7936 00:16:44.454 } 00:16:44.454 ] 00:16:44.454 }' 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 [2024-12-08 20:12:16.162965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.454 [2024-12-08 20:12:16.204511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.454 [2024-12-08 20:12:16.204632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.454 [2024-12-08 20:12:16.204673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.454 [2024-12-08 20:12:16.204698] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.454 20:12:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.454 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.455 "name": "raid_bdev1", 00:16:44.455 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:44.455 
"strip_size_kb": 0, 00:16:44.455 "state": "online", 00:16:44.455 "raid_level": "raid1", 00:16:44.455 "superblock": true, 00:16:44.455 "num_base_bdevs": 2, 00:16:44.455 "num_base_bdevs_discovered": 1, 00:16:44.455 "num_base_bdevs_operational": 1, 00:16:44.455 "base_bdevs_list": [ 00:16:44.455 { 00:16:44.455 "name": null, 00:16:44.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.455 "is_configured": false, 00:16:44.455 "data_offset": 0, 00:16:44.455 "data_size": 7936 00:16:44.455 }, 00:16:44.455 { 00:16:44.455 "name": "BaseBdev2", 00:16:44.455 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:44.455 "is_configured": true, 00:16:44.455 "data_offset": 256, 00:16:44.455 "data_size": 7936 00:16:44.455 } 00:16:44.455 ] 00:16:44.455 }' 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.455 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.025 
20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.025 "name": "raid_bdev1", 00:16:45.025 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:45.025 "strip_size_kb": 0, 00:16:45.025 "state": "online", 00:16:45.025 "raid_level": "raid1", 00:16:45.025 "superblock": true, 00:16:45.025 "num_base_bdevs": 2, 00:16:45.025 "num_base_bdevs_discovered": 1, 00:16:45.025 "num_base_bdevs_operational": 1, 00:16:45.025 "base_bdevs_list": [ 00:16:45.025 { 00:16:45.025 "name": null, 00:16:45.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.025 "is_configured": false, 00:16:45.025 "data_offset": 0, 00:16:45.025 "data_size": 7936 00:16:45.025 }, 00:16:45.025 { 00:16:45.025 "name": "BaseBdev2", 00:16:45.025 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:45.025 "is_configured": true, 00:16:45.025 "data_offset": 256, 00:16:45.025 "data_size": 7936 00:16:45.025 } 00:16:45.025 ] 00:16:45.025 }' 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.025 [2024-12-08 20:12:16.849678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.025 
[2024-12-08 20:12:16.865498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.025 20:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:45.025 [2024-12-08 20:12:16.867279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.965 "name": "raid_bdev1", 00:16:45.965 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:45.965 "strip_size_kb": 0, 00:16:45.965 "state": "online", 00:16:45.965 "raid_level": "raid1", 00:16:45.965 "superblock": true, 00:16:45.965 "num_base_bdevs": 2, 00:16:45.965 
"num_base_bdevs_discovered": 2, 00:16:45.965 "num_base_bdevs_operational": 2, 00:16:45.965 "process": { 00:16:45.965 "type": "rebuild", 00:16:45.965 "target": "spare", 00:16:45.965 "progress": { 00:16:45.965 "blocks": 2560, 00:16:45.965 "percent": 32 00:16:45.965 } 00:16:45.965 }, 00:16:45.965 "base_bdevs_list": [ 00:16:45.965 { 00:16:45.965 "name": "spare", 00:16:45.965 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:45.965 "is_configured": true, 00:16:45.965 "data_offset": 256, 00:16:45.965 "data_size": 7936 00:16:45.965 }, 00:16:45.965 { 00:16:45.965 "name": "BaseBdev2", 00:16:45.965 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:45.965 "is_configured": true, 00:16:45.965 "data_offset": 256, 00:16:45.965 "data_size": 7936 00:16:45.965 } 00:16:45.965 ] 00:16:45.965 }' 00:16:45.965 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.225 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.225 20:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:46.225 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local 
timeout=660 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.225 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.225 "name": "raid_bdev1", 00:16:46.225 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:46.225 "strip_size_kb": 0, 00:16:46.226 "state": "online", 00:16:46.226 "raid_level": "raid1", 00:16:46.226 "superblock": true, 00:16:46.226 "num_base_bdevs": 2, 00:16:46.226 "num_base_bdevs_discovered": 2, 00:16:46.226 "num_base_bdevs_operational": 2, 00:16:46.226 "process": { 00:16:46.226 "type": "rebuild", 00:16:46.226 "target": "spare", 00:16:46.226 "progress": { 00:16:46.226 "blocks": 2816, 00:16:46.226 "percent": 35 00:16:46.226 } 00:16:46.226 }, 00:16:46.226 "base_bdevs_list": [ 00:16:46.226 { 00:16:46.226 "name": 
"spare", 00:16:46.226 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:46.226 "is_configured": true, 00:16:46.226 "data_offset": 256, 00:16:46.226 "data_size": 7936 00:16:46.226 }, 00:16:46.226 { 00:16:46.226 "name": "BaseBdev2", 00:16:46.226 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:46.226 "is_configured": true, 00:16:46.226 "data_offset": 256, 00:16:46.226 "data_size": 7936 00:16:46.226 } 00:16:46.226 ] 00:16:46.226 }' 00:16:46.226 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.226 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.226 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.226 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.226 20:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.607 "name": "raid_bdev1", 00:16:47.607 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:47.607 "strip_size_kb": 0, 00:16:47.607 "state": "online", 00:16:47.607 "raid_level": "raid1", 00:16:47.607 "superblock": true, 00:16:47.607 "num_base_bdevs": 2, 00:16:47.607 "num_base_bdevs_discovered": 2, 00:16:47.607 "num_base_bdevs_operational": 2, 00:16:47.607 "process": { 00:16:47.607 "type": "rebuild", 00:16:47.607 "target": "spare", 00:16:47.607 "progress": { 00:16:47.607 "blocks": 5632, 00:16:47.607 "percent": 70 00:16:47.607 } 00:16:47.607 }, 00:16:47.607 "base_bdevs_list": [ 00:16:47.607 { 00:16:47.607 "name": "spare", 00:16:47.607 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:47.607 "is_configured": true, 00:16:47.607 "data_offset": 256, 00:16:47.607 "data_size": 7936 00:16:47.607 }, 00:16:47.607 { 00:16:47.607 "name": "BaseBdev2", 00:16:47.607 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:47.607 "is_configured": true, 00:16:47.607 "data_offset": 256, 00:16:47.607 "data_size": 7936 00:16:47.607 } 00:16:47.607 ] 00:16:47.607 }' 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.607 20:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:48.177 [2024-12-08 20:12:19.979069] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:48.177 [2024-12-08 20:12:19.979180] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:48.177 [2024-12-08 20:12:19.979345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.437 "name": "raid_bdev1", 00:16:48.437 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:48.437 "strip_size_kb": 0, 00:16:48.437 "state": "online", 00:16:48.437 "raid_level": "raid1", 00:16:48.437 "superblock": true, 00:16:48.437 "num_base_bdevs": 
2, 00:16:48.437 "num_base_bdevs_discovered": 2, 00:16:48.437 "num_base_bdevs_operational": 2, 00:16:48.437 "base_bdevs_list": [ 00:16:48.437 { 00:16:48.437 "name": "spare", 00:16:48.437 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:48.437 "is_configured": true, 00:16:48.437 "data_offset": 256, 00:16:48.437 "data_size": 7936 00:16:48.437 }, 00:16:48.437 { 00:16:48.437 "name": "BaseBdev2", 00:16:48.437 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:48.437 "is_configured": true, 00:16:48.437 "data_offset": 256, 00:16:48.437 "data_size": 7936 00:16:48.437 } 00:16:48.437 ] 00:16:48.437 }' 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:48.437 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.697 
20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.697 "name": "raid_bdev1", 00:16:48.697 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:48.697 "strip_size_kb": 0, 00:16:48.697 "state": "online", 00:16:48.697 "raid_level": "raid1", 00:16:48.697 "superblock": true, 00:16:48.697 "num_base_bdevs": 2, 00:16:48.697 "num_base_bdevs_discovered": 2, 00:16:48.697 "num_base_bdevs_operational": 2, 00:16:48.697 "base_bdevs_list": [ 00:16:48.697 { 00:16:48.697 "name": "spare", 00:16:48.697 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:48.697 "is_configured": true, 00:16:48.697 "data_offset": 256, 00:16:48.697 "data_size": 7936 00:16:48.697 }, 00:16:48.697 { 00:16:48.697 "name": "BaseBdev2", 00:16:48.697 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:48.697 "is_configured": true, 00:16:48.697 "data_offset": 256, 00:16:48.697 "data_size": 7936 00:16:48.697 } 00:16:48.697 ] 00:16:48.697 }' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.697 
20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.697 "name": "raid_bdev1", 00:16:48.697 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:48.697 "strip_size_kb": 0, 00:16:48.697 "state": "online", 00:16:48.697 "raid_level": "raid1", 00:16:48.697 "superblock": true, 00:16:48.697 "num_base_bdevs": 2, 00:16:48.697 "num_base_bdevs_discovered": 2, 00:16:48.697 "num_base_bdevs_operational": 2, 00:16:48.697 "base_bdevs_list": [ 00:16:48.697 { 00:16:48.697 "name": "spare", 00:16:48.697 "uuid": 
"0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:48.697 "is_configured": true, 00:16:48.697 "data_offset": 256, 00:16:48.697 "data_size": 7936 00:16:48.697 }, 00:16:48.697 { 00:16:48.697 "name": "BaseBdev2", 00:16:48.697 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:48.697 "is_configured": true, 00:16:48.697 "data_offset": 256, 00:16:48.697 "data_size": 7936 00:16:48.697 } 00:16:48.697 ] 00:16:48.697 }' 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.697 20:12:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.265 [2024-12-08 20:12:21.006806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.265 [2024-12-08 20:12:21.006877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.265 [2024-12-08 20:12:21.007000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.265 [2024-12-08 20:12:21.007120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.265 [2024-12-08 20:12:21.007170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.265 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.266 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:49.526 /dev/nbd0 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.526 1+0 records in 00:16:49.526 1+0 records out 00:16:49.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258764 s, 15.8 MB/s 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:49.526 /dev/nbd1 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:49.526 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.786 1+0 records in 00:16:49.786 1+0 records out 00:16:49.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295756 s, 13.8 MB/s 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.786 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.046 20:12:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.046 20:12:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:50.305 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 [2024-12-08 20:12:22.112731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.306 [2024-12-08 20:12:22.112820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.306 [2024-12-08 20:12:22.112851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:50.306 [2024-12-08 20:12:22.112860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.306 [2024-12-08 20:12:22.115093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.306 [2024-12-08 20:12:22.115161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.306 [2024-12-08 20:12:22.115279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:50.306 [2024-12-08 20:12:22.115357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.306 [2024-12-08 20:12:22.115606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.306 spare 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 [2024-12-08 20:12:22.215555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:50.306 [2024-12-08 
20:12:22.215616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:50.306 [2024-12-08 20:12:22.215895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:50.306 [2024-12-08 20:12:22.216083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:50.306 [2024-12-08 20:12:22.216095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:50.306 [2024-12-08 20:12:22.216262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.306 
20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.306 "name": "raid_bdev1", 00:16:50.306 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:50.306 "strip_size_kb": 0, 00:16:50.306 "state": "online", 00:16:50.306 "raid_level": "raid1", 00:16:50.306 "superblock": true, 00:16:50.306 "num_base_bdevs": 2, 00:16:50.306 "num_base_bdevs_discovered": 2, 00:16:50.306 "num_base_bdevs_operational": 2, 00:16:50.306 "base_bdevs_list": [ 00:16:50.306 { 00:16:50.306 "name": "spare", 00:16:50.306 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:50.306 "is_configured": true, 00:16:50.306 "data_offset": 256, 00:16:50.306 "data_size": 7936 00:16:50.306 }, 00:16:50.306 { 00:16:50.306 "name": "BaseBdev2", 00:16:50.306 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:50.306 "is_configured": true, 00:16:50.306 "data_offset": 256, 00:16:50.306 "data_size": 7936 00:16:50.306 } 00:16:50.306 ] 00:16:50.306 }' 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.306 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.873 20:12:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.873 "name": "raid_bdev1", 00:16:50.873 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:50.873 "strip_size_kb": 0, 00:16:50.873 "state": "online", 00:16:50.873 "raid_level": "raid1", 00:16:50.873 "superblock": true, 00:16:50.873 "num_base_bdevs": 2, 00:16:50.873 "num_base_bdevs_discovered": 2, 00:16:50.873 "num_base_bdevs_operational": 2, 00:16:50.873 "base_bdevs_list": [ 00:16:50.873 { 00:16:50.873 "name": "spare", 00:16:50.873 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:50.873 "is_configured": true, 00:16:50.873 "data_offset": 256, 00:16:50.873 "data_size": 7936 00:16:50.873 }, 00:16:50.873 { 00:16:50.873 "name": "BaseBdev2", 00:16:50.873 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:50.873 "is_configured": true, 00:16:50.873 "data_offset": 256, 00:16:50.873 "data_size": 7936 00:16:50.873 } 00:16:50.873 ] 00:16:50.873 }' 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.873 20:12:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.873 [2024-12-08 20:12:22.831582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.873 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.874 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.133 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.133 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.133 "name": "raid_bdev1", 00:16:51.133 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:51.133 "strip_size_kb": 0, 00:16:51.133 "state": "online", 00:16:51.133 "raid_level": "raid1", 00:16:51.133 "superblock": true, 00:16:51.133 "num_base_bdevs": 2, 00:16:51.133 "num_base_bdevs_discovered": 1, 00:16:51.133 "num_base_bdevs_operational": 1, 00:16:51.133 "base_bdevs_list": [ 00:16:51.133 { 00:16:51.133 "name": null, 00:16:51.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.133 "is_configured": false, 00:16:51.133 "data_offset": 0, 00:16:51.133 "data_size": 7936 00:16:51.133 }, 00:16:51.133 { 00:16:51.133 "name": "BaseBdev2", 00:16:51.133 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:51.133 "is_configured": true, 00:16:51.133 
"data_offset": 256, 00:16:51.133 "data_size": 7936 00:16:51.133 } 00:16:51.133 ] 00:16:51.133 }' 00:16:51.133 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.133 20:12:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 20:12:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.392 20:12:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.392 20:12:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.392 [2024-12-08 20:12:23.298776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.392 [2024-12-08 20:12:23.299048] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.392 [2024-12-08 20:12:23.299114] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:51.392 [2024-12-08 20:12:23.299190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.392 [2024-12-08 20:12:23.314717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:16:51.392 20:12:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.392 20:12:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:51.392 [2024-12-08 20:12:23.316577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.771 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.772 "name": "raid_bdev1", 00:16:52.772 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:52.772 "strip_size_kb": 0, 00:16:52.772 "state": "online", 
00:16:52.772 "raid_level": "raid1", 00:16:52.772 "superblock": true, 00:16:52.772 "num_base_bdevs": 2, 00:16:52.772 "num_base_bdevs_discovered": 2, 00:16:52.772 "num_base_bdevs_operational": 2, 00:16:52.772 "process": { 00:16:52.772 "type": "rebuild", 00:16:52.772 "target": "spare", 00:16:52.772 "progress": { 00:16:52.772 "blocks": 2560, 00:16:52.772 "percent": 32 00:16:52.772 } 00:16:52.772 }, 00:16:52.772 "base_bdevs_list": [ 00:16:52.772 { 00:16:52.772 "name": "spare", 00:16:52.772 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:52.772 "is_configured": true, 00:16:52.772 "data_offset": 256, 00:16:52.772 "data_size": 7936 00:16:52.772 }, 00:16:52.772 { 00:16:52.772 "name": "BaseBdev2", 00:16:52.772 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:52.772 "is_configured": true, 00:16:52.772 "data_offset": 256, 00:16:52.772 "data_size": 7936 00:16:52.772 } 00:16:52.772 ] 00:16:52.772 }' 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.772 [2024-12-08 20:12:24.480339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.772 [2024-12-08 20:12:24.521256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.772 [2024-12-08 
20:12:24.521363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.772 [2024-12-08 20:12:24.521379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.772 [2024-12-08 20:12:24.521388] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.772 "name": "raid_bdev1", 00:16:52.772 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:52.772 "strip_size_kb": 0, 00:16:52.772 "state": "online", 00:16:52.772 "raid_level": "raid1", 00:16:52.772 "superblock": true, 00:16:52.772 "num_base_bdevs": 2, 00:16:52.772 "num_base_bdevs_discovered": 1, 00:16:52.772 "num_base_bdevs_operational": 1, 00:16:52.772 "base_bdevs_list": [ 00:16:52.772 { 00:16:52.772 "name": null, 00:16:52.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.772 "is_configured": false, 00:16:52.772 "data_offset": 0, 00:16:52.772 "data_size": 7936 00:16:52.772 }, 00:16:52.772 { 00:16:52.772 "name": "BaseBdev2", 00:16:52.772 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:52.772 "is_configured": true, 00:16:52.772 "data_offset": 256, 00:16:52.772 "data_size": 7936 00:16:52.772 } 00:16:52.772 ] 00:16:52.772 }' 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.772 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.031 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.031 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.031 20:12:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.031 [2024-12-08 20:12:25.006054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.031 [2024-12-08 20:12:25.006165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.031 [2024-12-08 20:12:25.006203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:53.031 [2024-12-08 20:12:25.006234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.031 [2024-12-08 20:12:25.006768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.031 [2024-12-08 20:12:25.006836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.031 [2024-12-08 20:12:25.006994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:53.031 [2024-12-08 20:12:25.007041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.031 [2024-12-08 20:12:25.007094] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:53.031 [2024-12-08 20:12:25.007178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.299 [2024-12-08 20:12:25.022394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:16:53.299 spare 00:16:53.299 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.299 20:12:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:53.299 [2024-12-08 20:12:25.024370] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.238 "name": "raid_bdev1", 00:16:54.238 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:54.238 "strip_size_kb": 0, 00:16:54.238 "state": "online", 00:16:54.238 "raid_level": "raid1", 00:16:54.238 "superblock": true, 00:16:54.238 "num_base_bdevs": 2, 00:16:54.238 "num_base_bdevs_discovered": 2, 00:16:54.238 "num_base_bdevs_operational": 2, 00:16:54.238 "process": { 00:16:54.238 "type": "rebuild", 00:16:54.238 "target": "spare", 00:16:54.238 "progress": { 00:16:54.238 "blocks": 2560, 00:16:54.238 "percent": 32 00:16:54.238 } 00:16:54.238 }, 00:16:54.238 "base_bdevs_list": [ 00:16:54.238 { 00:16:54.238 "name": "spare", 00:16:54.238 "uuid": "0105dcf8-9706-5971-893b-11baae3fdea4", 00:16:54.238 "is_configured": true, 00:16:54.238 "data_offset": 256, 00:16:54.238 "data_size": 7936 00:16:54.238 }, 00:16:54.238 { 00:16:54.238 "name": "BaseBdev2", 00:16:54.238 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:54.238 "is_configured": true, 00:16:54.238 "data_offset": 256, 00:16:54.238 "data_size": 7936 00:16:54.238 } 00:16:54.238 ] 00:16:54.238 }' 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.238 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 [2024-12-08 20:12:26.183679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.498 [2024-12-08 20:12:26.229088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:54.498 [2024-12-08 20:12:26.229142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.498 [2024-12-08 20:12:26.229158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.498 [2024-12-08 20:12:26.229181] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.498 "name": "raid_bdev1", 00:16:54.498 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:54.498 "strip_size_kb": 0, 00:16:54.498 "state": "online", 00:16:54.498 "raid_level": "raid1", 00:16:54.498 "superblock": true, 00:16:54.498 "num_base_bdevs": 2, 00:16:54.498 "num_base_bdevs_discovered": 1, 00:16:54.498 "num_base_bdevs_operational": 1, 00:16:54.498 "base_bdevs_list": [ 00:16:54.498 { 00:16:54.498 "name": null, 00:16:54.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.498 "is_configured": false, 00:16:54.498 "data_offset": 0, 00:16:54.498 "data_size": 7936 00:16:54.498 }, 00:16:54.498 { 00:16:54.498 "name": "BaseBdev2", 00:16:54.498 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:54.498 "is_configured": true, 00:16:54.498 "data_offset": 256, 00:16:54.498 "data_size": 7936 00:16:54.498 } 00:16:54.498 ] 00:16:54.498 }' 
00:16:54.498 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.499 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.758 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.758 "name": "raid_bdev1", 00:16:54.758 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:54.758 "strip_size_kb": 0, 00:16:54.758 "state": "online", 00:16:54.758 "raid_level": "raid1", 00:16:54.758 "superblock": true, 00:16:54.758 "num_base_bdevs": 2, 00:16:54.758 "num_base_bdevs_discovered": 1, 00:16:54.758 "num_base_bdevs_operational": 1, 00:16:54.758 "base_bdevs_list": [ 00:16:54.758 { 00:16:54.758 "name": null, 00:16:54.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.759 "is_configured": false, 00:16:54.759 "data_offset": 0, 
00:16:54.759 "data_size": 7936 00:16:54.759 }, 00:16:54.759 { 00:16:54.759 "name": "BaseBdev2", 00:16:54.759 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:54.759 "is_configured": true, 00:16:54.759 "data_offset": 256, 00:16:54.759 "data_size": 7936 00:16:54.759 } 00:16:54.759 ] 00:16:54.759 }' 00:16:54.759 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.019 [2024-12-08 20:12:26.791805] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:55.019 [2024-12-08 20:12:26.791859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.019 [2024-12-08 20:12:26.791888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:55.019 [2024-12-08 20:12:26.791906] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.019 [2024-12-08 20:12:26.792413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.019 [2024-12-08 20:12:26.792437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.019 [2024-12-08 20:12:26.792521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:55.019 [2024-12-08 20:12:26.792537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.019 [2024-12-08 20:12:26.792549] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:55.019 [2024-12-08 20:12:26.792559] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:55.019 BaseBdev1 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.019 20:12:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.958 "name": "raid_bdev1", 00:16:55.958 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:55.958 "strip_size_kb": 0, 00:16:55.958 "state": "online", 00:16:55.958 "raid_level": "raid1", 00:16:55.958 "superblock": true, 00:16:55.958 "num_base_bdevs": 2, 00:16:55.958 "num_base_bdevs_discovered": 1, 00:16:55.958 "num_base_bdevs_operational": 1, 00:16:55.958 "base_bdevs_list": [ 00:16:55.958 { 00:16:55.958 "name": null, 00:16:55.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.958 "is_configured": false, 00:16:55.958 "data_offset": 0, 00:16:55.958 "data_size": 7936 00:16:55.958 }, 00:16:55.958 { 00:16:55.958 "name": "BaseBdev2", 00:16:55.958 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:55.958 "is_configured": true, 00:16:55.958 "data_offset": 256, 00:16:55.958 "data_size": 7936 00:16:55.958 } 00:16:55.958 ] 00:16:55.958 }' 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.958 20:12:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.527 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.527 "name": "raid_bdev1", 00:16:56.527 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:56.527 "strip_size_kb": 0, 00:16:56.527 "state": "online", 00:16:56.527 "raid_level": "raid1", 00:16:56.527 "superblock": true, 00:16:56.527 "num_base_bdevs": 2, 00:16:56.527 "num_base_bdevs_discovered": 1, 00:16:56.527 "num_base_bdevs_operational": 1, 00:16:56.527 "base_bdevs_list": [ 00:16:56.527 { 00:16:56.527 "name": null, 00:16:56.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.527 "is_configured": false, 00:16:56.528 "data_offset": 0, 00:16:56.528 "data_size": 7936 00:16:56.528 }, 00:16:56.528 { 00:16:56.528 "name": "BaseBdev2", 00:16:56.528 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:56.528 "is_configured": true, 
00:16:56.528 "data_offset": 256, 00:16:56.528 "data_size": 7936 00:16:56.528 } 00:16:56.528 ] 00:16:56.528 }' 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.528 [2024-12-08 20:12:28.417048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.528 [2024-12-08 20:12:28.417263] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:56.528 [2024-12-08 20:12:28.417344] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:56.528 request: 00:16:56.528 { 00:16:56.528 "base_bdev": "BaseBdev1", 00:16:56.528 "raid_bdev": "raid_bdev1", 00:16:56.528 "method": "bdev_raid_add_base_bdev", 00:16:56.528 "req_id": 1 00:16:56.528 } 00:16:56.528 Got JSON-RPC error response 00:16:56.528 response: 00:16:56.528 { 00:16:56.528 "code": -22, 00:16:56.528 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:56.528 } 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.528 20:12:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.469 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.729 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.729 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.729 "name": "raid_bdev1", 00:16:57.729 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:57.729 "strip_size_kb": 0, 00:16:57.729 "state": "online", 00:16:57.729 "raid_level": "raid1", 00:16:57.729 "superblock": true, 00:16:57.729 "num_base_bdevs": 2, 00:16:57.729 "num_base_bdevs_discovered": 1, 00:16:57.729 "num_base_bdevs_operational": 1, 00:16:57.729 "base_bdevs_list": [ 00:16:57.729 { 00:16:57.729 "name": null, 00:16:57.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.729 "is_configured": false, 00:16:57.729 "data_offset": 0, 00:16:57.729 "data_size": 7936 00:16:57.729 }, 00:16:57.729 { 00:16:57.729 "name": "BaseBdev2", 00:16:57.729 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:57.729 "is_configured": true, 00:16:57.729 "data_offset": 256, 00:16:57.729 "data_size": 7936 00:16:57.729 } 00:16:57.729 ] 00:16:57.729 }' 
00:16:57.729 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.729 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.989 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.250 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.250 "name": "raid_bdev1", 00:16:58.250 "uuid": "cb610f28-62b3-4dfd-9b31-cd63c0b2388d", 00:16:58.250 "strip_size_kb": 0, 00:16:58.250 "state": "online", 00:16:58.250 "raid_level": "raid1", 00:16:58.250 "superblock": true, 00:16:58.250 "num_base_bdevs": 2, 00:16:58.250 "num_base_bdevs_discovered": 1, 00:16:58.250 "num_base_bdevs_operational": 1, 00:16:58.250 "base_bdevs_list": [ 00:16:58.250 { 00:16:58.250 "name": null, 00:16:58.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.250 "is_configured": false, 00:16:58.250 "data_offset": 0, 
00:16:58.250 "data_size": 7936 00:16:58.250 }, 00:16:58.250 { 00:16:58.250 "name": "BaseBdev2", 00:16:58.250 "uuid": "95af0ac7-fd07-5827-94c6-9f71d2261e9a", 00:16:58.250 "is_configured": true, 00:16:58.250 "data_offset": 256, 00:16:58.250 "data_size": 7936 00:16:58.250 } 00:16:58.250 ] 00:16:58.250 }' 00:16:58.250 20:12:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86139 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86139 ']' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86139 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86139 00:16:58.250 killing process with pid 86139 00:16:58.250 Received shutdown signal, test time was about 60.000000 seconds 00:16:58.250 00:16:58.250 Latency(us) 00:16:58.250 [2024-12-08T20:12:30.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.250 [2024-12-08T20:12:30.228Z] =================================================================================================================== 00:16:58.250 [2024-12-08T20:12:30.228Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.250 20:12:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86139' 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86139 00:16:58.250 [2024-12-08 20:12:30.105919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.250 [2024-12-08 20:12:30.106047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.250 [2024-12-08 20:12:30.106106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.250 20:12:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86139 00:16:58.250 [2024-12-08 20:12:30.106118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:58.514 [2024-12-08 20:12:30.386692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.491 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:59.491 00:16:59.491 real 0m19.575s 00:16:59.491 user 0m25.700s 00:16:59.491 sys 0m2.422s 00:16:59.491 ************************************ 00:16:59.491 END TEST raid_rebuild_test_sb_4k 00:16:59.491 ************************************ 00:16:59.491 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.491 20:12:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.765 20:12:31 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:59.765 20:12:31 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:59.765 20:12:31 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:59.765 20:12:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.765 20:12:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.765 ************************************ 00:16:59.765 START TEST raid_state_function_test_sb_md_separate 00:16:59.765 ************************************ 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.765 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:59.766 Process raid pid: 86825 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86825 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86825' 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86825 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86825 ']' 00:16:59.766 20:12:31 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.766 20:12:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.766 [2024-12-08 20:12:31.603059] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:59.766 [2024-12-08 20:12:31.603659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.026 [2024-12-08 20:12:31.777082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.026 [2024-12-08 20:12:31.878664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.287 [2024-12-08 20:12:32.078796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.287 [2024-12-08 20:12:32.078880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.547 [2024-12-08 20:12:32.419854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.547 [2024-12-08 20:12:32.419909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.547 [2024-12-08 20:12:32.419919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.547 [2024-12-08 20:12:32.419928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.547 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.548 "name": "Existed_Raid", 00:17:00.548 "uuid": "4db451ab-c17e-4b5a-8b57-e2d429dab1ac", 00:17:00.548 "strip_size_kb": 0, 00:17:00.548 "state": "configuring", 00:17:00.548 "raid_level": "raid1", 00:17:00.548 "superblock": true, 00:17:00.548 "num_base_bdevs": 2, 00:17:00.548 "num_base_bdevs_discovered": 0, 00:17:00.548 "num_base_bdevs_operational": 2, 00:17:00.548 "base_bdevs_list": [ 00:17:00.548 { 00:17:00.548 "name": "BaseBdev1", 00:17:00.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.548 "is_configured": false, 00:17:00.548 "data_offset": 0, 00:17:00.548 "data_size": 0 00:17:00.548 }, 00:17:00.548 { 00:17:00.548 "name": "BaseBdev2", 00:17:00.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.548 "is_configured": false, 00:17:00.548 "data_offset": 0, 00:17:00.548 "data_size": 0 00:17:00.548 } 00:17:00.548 ] 00:17:00.548 }' 00:17:00.548 20:12:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.548 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 [2024-12-08 20:12:32.847100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.118 [2024-12-08 20:12:32.847174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 [2024-12-08 20:12:32.855102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.118 [2024-12-08 20:12:32.855188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.118 [2024-12-08 20:12:32.855214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.118 [2024-12-08 20:12:32.855238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.118 20:12:32 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 [2024-12-08 20:12:32.898606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.118 BaseBdev1 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 [ 00:17:01.118 { 00:17:01.118 "name": "BaseBdev1", 00:17:01.118 "aliases": [ 00:17:01.118 "c89c966d-82eb-4c79-9197-46747197f200" 00:17:01.118 ], 00:17:01.118 "product_name": "Malloc disk", 00:17:01.118 "block_size": 4096, 00:17:01.118 "num_blocks": 8192, 00:17:01.118 "uuid": "c89c966d-82eb-4c79-9197-46747197f200", 00:17:01.118 "md_size": 32, 00:17:01.118 "md_interleave": false, 00:17:01.118 "dif_type": 0, 00:17:01.118 "assigned_rate_limits": { 00:17:01.118 "rw_ios_per_sec": 0, 00:17:01.118 "rw_mbytes_per_sec": 0, 00:17:01.118 "r_mbytes_per_sec": 0, 00:17:01.118 "w_mbytes_per_sec": 0 00:17:01.118 }, 00:17:01.118 "claimed": true, 00:17:01.118 "claim_type": "exclusive_write", 00:17:01.118 "zoned": false, 00:17:01.118 "supported_io_types": { 00:17:01.118 "read": true, 00:17:01.118 "write": true, 00:17:01.118 "unmap": true, 00:17:01.118 "flush": true, 00:17:01.118 "reset": true, 00:17:01.118 "nvme_admin": false, 00:17:01.118 "nvme_io": false, 00:17:01.118 "nvme_io_md": false, 00:17:01.118 "write_zeroes": true, 00:17:01.118 "zcopy": true, 00:17:01.118 "get_zone_info": false, 00:17:01.118 "zone_management": false, 00:17:01.118 "zone_append": false, 00:17:01.118 "compare": false, 00:17:01.118 "compare_and_write": false, 00:17:01.118 "abort": true, 00:17:01.118 "seek_hole": false, 00:17:01.118 "seek_data": false, 00:17:01.118 "copy": true, 00:17:01.118 "nvme_iov_md": false 00:17:01.118 }, 00:17:01.118 "memory_domains": [ 00:17:01.118 { 00:17:01.118 "dma_device_id": "system", 00:17:01.118 "dma_device_type": 1 00:17:01.118 }, 
00:17:01.118 { 00:17:01.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.118 "dma_device_type": 2 00:17:01.118 } 00:17:01.118 ], 00:17:01.118 "driver_specific": {} 00:17:01.118 } 00:17:01.118 ] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.118 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.118 "name": "Existed_Raid", 00:17:01.118 "uuid": "9cce8a42-170b-4fc8-8cf7-4954eaa11d07", 00:17:01.118 "strip_size_kb": 0, 00:17:01.118 "state": "configuring", 00:17:01.119 "raid_level": "raid1", 00:17:01.119 "superblock": true, 00:17:01.119 "num_base_bdevs": 2, 00:17:01.119 "num_base_bdevs_discovered": 1, 00:17:01.119 "num_base_bdevs_operational": 2, 00:17:01.119 "base_bdevs_list": [ 00:17:01.119 { 00:17:01.119 "name": "BaseBdev1", 00:17:01.119 "uuid": "c89c966d-82eb-4c79-9197-46747197f200", 00:17:01.119 "is_configured": true, 00:17:01.119 "data_offset": 256, 00:17:01.119 "data_size": 7936 00:17:01.119 }, 00:17:01.119 { 00:17:01.119 "name": "BaseBdev2", 00:17:01.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.119 "is_configured": false, 00:17:01.119 "data_offset": 0, 00:17:01.119 "data_size": 0 00:17:01.119 } 00:17:01.119 ] 00:17:01.119 }' 00:17:01.119 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.119 20:12:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.379 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.379 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.379 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.379 [2024-12-08 20:12:33.353892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.379 [2024-12-08 20:12:33.353942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:01.639 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.639 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:01.639 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.639 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.639 [2024-12-08 20:12:33.365907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.639 [2024-12-08 20:12:33.367714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.639 [2024-12-08 20:12:33.367757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.640 "name": "Existed_Raid", 00:17:01.640 "uuid": "4e684c52-6fde-44ff-b287-5e1651ccbe58", 00:17:01.640 "strip_size_kb": 0, 00:17:01.640 "state": "configuring", 00:17:01.640 "raid_level": "raid1", 00:17:01.640 "superblock": true, 00:17:01.640 "num_base_bdevs": 2, 00:17:01.640 "num_base_bdevs_discovered": 1, 00:17:01.640 
"num_base_bdevs_operational": 2, 00:17:01.640 "base_bdevs_list": [ 00:17:01.640 { 00:17:01.640 "name": "BaseBdev1", 00:17:01.640 "uuid": "c89c966d-82eb-4c79-9197-46747197f200", 00:17:01.640 "is_configured": true, 00:17:01.640 "data_offset": 256, 00:17:01.640 "data_size": 7936 00:17:01.640 }, 00:17:01.640 { 00:17:01.640 "name": "BaseBdev2", 00:17:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.640 "is_configured": false, 00:17:01.640 "data_offset": 0, 00:17:01.640 "data_size": 0 00:17:01.640 } 00:17:01.640 ] 00:17:01.640 }' 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.640 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.901 [2024-12-08 20:12:33.843756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.901 [2024-12-08 20:12:33.844102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:01.901 [2024-12-08 20:12:33.844156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.901 [2024-12-08 20:12:33.844288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:01.901 [2024-12-08 20:12:33.844466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:01.901 [2024-12-08 20:12:33.844529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:01.901 [2024-12-08 
20:12:33.844718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.901 BaseBdev2 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.901 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.901 [ 00:17:01.901 { 00:17:01.901 "name": "BaseBdev2", 00:17:01.901 "aliases": [ 00:17:01.901 
"22881e98-f73e-4dc4-bfa6-cc4e4b077c17" 00:17:01.901 ], 00:17:01.901 "product_name": "Malloc disk", 00:17:01.901 "block_size": 4096, 00:17:01.901 "num_blocks": 8192, 00:17:01.901 "uuid": "22881e98-f73e-4dc4-bfa6-cc4e4b077c17", 00:17:01.901 "md_size": 32, 00:17:01.901 "md_interleave": false, 00:17:01.901 "dif_type": 0, 00:17:01.901 "assigned_rate_limits": { 00:17:01.901 "rw_ios_per_sec": 0, 00:17:01.901 "rw_mbytes_per_sec": 0, 00:17:01.901 "r_mbytes_per_sec": 0, 00:17:01.901 "w_mbytes_per_sec": 0 00:17:01.901 }, 00:17:01.901 "claimed": true, 00:17:01.901 "claim_type": "exclusive_write", 00:17:01.901 "zoned": false, 00:17:01.901 "supported_io_types": { 00:17:02.161 "read": true, 00:17:02.161 "write": true, 00:17:02.161 "unmap": true, 00:17:02.161 "flush": true, 00:17:02.161 "reset": true, 00:17:02.161 "nvme_admin": false, 00:17:02.161 "nvme_io": false, 00:17:02.161 "nvme_io_md": false, 00:17:02.161 "write_zeroes": true, 00:17:02.161 "zcopy": true, 00:17:02.161 "get_zone_info": false, 00:17:02.161 "zone_management": false, 00:17:02.161 "zone_append": false, 00:17:02.161 "compare": false, 00:17:02.161 "compare_and_write": false, 00:17:02.161 "abort": true, 00:17:02.161 "seek_hole": false, 00:17:02.161 "seek_data": false, 00:17:02.161 "copy": true, 00:17:02.161 "nvme_iov_md": false 00:17:02.161 }, 00:17:02.161 "memory_domains": [ 00:17:02.161 { 00:17:02.161 "dma_device_id": "system", 00:17:02.161 "dma_device_type": 1 00:17:02.161 }, 00:17:02.161 { 00:17:02.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.161 "dma_device_type": 2 00:17:02.161 } 00:17:02.161 ], 00:17:02.161 "driver_specific": {} 00:17:02.161 } 00:17:02.161 ] 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.161 20:12:33 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.161 "name": "Existed_Raid", 00:17:02.161 "uuid": "4e684c52-6fde-44ff-b287-5e1651ccbe58", 00:17:02.161 "strip_size_kb": 0, 00:17:02.161 "state": "online", 00:17:02.161 "raid_level": "raid1", 00:17:02.161 "superblock": true, 00:17:02.161 "num_base_bdevs": 2, 00:17:02.161 "num_base_bdevs_discovered": 2, 00:17:02.161 "num_base_bdevs_operational": 2, 00:17:02.161 "base_bdevs_list": [ 00:17:02.161 { 00:17:02.161 "name": "BaseBdev1", 00:17:02.161 "uuid": "c89c966d-82eb-4c79-9197-46747197f200", 00:17:02.161 "is_configured": true, 00:17:02.161 "data_offset": 256, 00:17:02.161 "data_size": 7936 00:17:02.161 }, 00:17:02.161 { 00:17:02.161 "name": "BaseBdev2", 00:17:02.161 "uuid": "22881e98-f73e-4dc4-bfa6-cc4e4b077c17", 00:17:02.161 "is_configured": true, 00:17:02.161 "data_offset": 256, 00:17:02.161 "data_size": 7936 00:17:02.161 } 00:17:02.161 ] 00:17:02.161 }' 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.161 20:12:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.422 20:12:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.422 [2024-12-08 20:12:34.339271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.422 "name": "Existed_Raid", 00:17:02.422 "aliases": [ 00:17:02.422 "4e684c52-6fde-44ff-b287-5e1651ccbe58" 00:17:02.422 ], 00:17:02.422 "product_name": "Raid Volume", 00:17:02.422 "block_size": 4096, 00:17:02.422 "num_blocks": 7936, 00:17:02.422 "uuid": "4e684c52-6fde-44ff-b287-5e1651ccbe58", 00:17:02.422 "md_size": 32, 00:17:02.422 "md_interleave": false, 00:17:02.422 "dif_type": 0, 00:17:02.422 "assigned_rate_limits": { 00:17:02.422 "rw_ios_per_sec": 0, 00:17:02.422 "rw_mbytes_per_sec": 0, 00:17:02.422 "r_mbytes_per_sec": 0, 00:17:02.422 "w_mbytes_per_sec": 0 00:17:02.422 }, 00:17:02.422 "claimed": false, 00:17:02.422 "zoned": false, 00:17:02.422 "supported_io_types": { 00:17:02.422 "read": true, 00:17:02.422 "write": true, 00:17:02.422 "unmap": false, 00:17:02.422 "flush": false, 00:17:02.422 "reset": true, 00:17:02.422 "nvme_admin": false, 00:17:02.422 "nvme_io": false, 00:17:02.422 "nvme_io_md": false, 00:17:02.422 "write_zeroes": true, 00:17:02.422 "zcopy": false, 00:17:02.422 "get_zone_info": 
false, 00:17:02.422 "zone_management": false, 00:17:02.422 "zone_append": false, 00:17:02.422 "compare": false, 00:17:02.422 "compare_and_write": false, 00:17:02.422 "abort": false, 00:17:02.422 "seek_hole": false, 00:17:02.422 "seek_data": false, 00:17:02.422 "copy": false, 00:17:02.422 "nvme_iov_md": false 00:17:02.422 }, 00:17:02.422 "memory_domains": [ 00:17:02.422 { 00:17:02.422 "dma_device_id": "system", 00:17:02.422 "dma_device_type": 1 00:17:02.422 }, 00:17:02.422 { 00:17:02.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.422 "dma_device_type": 2 00:17:02.422 }, 00:17:02.422 { 00:17:02.422 "dma_device_id": "system", 00:17:02.422 "dma_device_type": 1 00:17:02.422 }, 00:17:02.422 { 00:17:02.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.422 "dma_device_type": 2 00:17:02.422 } 00:17:02.422 ], 00:17:02.422 "driver_specific": { 00:17:02.422 "raid": { 00:17:02.422 "uuid": "4e684c52-6fde-44ff-b287-5e1651ccbe58", 00:17:02.422 "strip_size_kb": 0, 00:17:02.422 "state": "online", 00:17:02.422 "raid_level": "raid1", 00:17:02.422 "superblock": true, 00:17:02.422 "num_base_bdevs": 2, 00:17:02.422 "num_base_bdevs_discovered": 2, 00:17:02.422 "num_base_bdevs_operational": 2, 00:17:02.422 "base_bdevs_list": [ 00:17:02.422 { 00:17:02.422 "name": "BaseBdev1", 00:17:02.422 "uuid": "c89c966d-82eb-4c79-9197-46747197f200", 00:17:02.422 "is_configured": true, 00:17:02.422 "data_offset": 256, 00:17:02.422 "data_size": 7936 00:17:02.422 }, 00:17:02.422 { 00:17:02.422 "name": "BaseBdev2", 00:17:02.422 "uuid": "22881e98-f73e-4dc4-bfa6-cc4e4b077c17", 00:17:02.422 "is_configured": true, 00:17:02.422 "data_offset": 256, 00:17:02.422 "data_size": 7936 00:17:02.422 } 00:17:02.422 ] 00:17:02.422 } 00:17:02.422 } 00:17:02.422 }' 00:17:02.422 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.682 20:12:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:02.682 BaseBdev2' 00:17:02.682 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.682 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:02.682 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.682 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.683 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.683 [2024-12-08 20:12:34.570587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.943 "name": "Existed_Raid", 00:17:02.943 "uuid": 
"4e684c52-6fde-44ff-b287-5e1651ccbe58", 00:17:02.943 "strip_size_kb": 0, 00:17:02.943 "state": "online", 00:17:02.943 "raid_level": "raid1", 00:17:02.943 "superblock": true, 00:17:02.943 "num_base_bdevs": 2, 00:17:02.943 "num_base_bdevs_discovered": 1, 00:17:02.943 "num_base_bdevs_operational": 1, 00:17:02.943 "base_bdevs_list": [ 00:17:02.943 { 00:17:02.943 "name": null, 00:17:02.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.943 "is_configured": false, 00:17:02.943 "data_offset": 0, 00:17:02.943 "data_size": 7936 00:17:02.943 }, 00:17:02.943 { 00:17:02.943 "name": "BaseBdev2", 00:17:02.943 "uuid": "22881e98-f73e-4dc4-bfa6-cc4e4b077c17", 00:17:02.943 "is_configured": true, 00:17:02.943 "data_offset": 256, 00:17:02.943 "data_size": 7936 00:17:02.943 } 00:17:02.943 ] 00:17:02.943 }' 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.943 20:12:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.209 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.209 [2024-12-08 20:12:35.147840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.209 [2024-12-08 20:12:35.148012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.468 [2024-12-08 20:12:35.244707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.468 [2024-12-08 20:12:35.244756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.468 [2024-12-08 20:12:35.244768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.468 20:12:35 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86825 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86825 ']' 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86825 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.468 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86825 00:17:03.469 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.469 killing process with pid 86825 00:17:03.469 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.469 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86825' 00:17:03.469 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86825 00:17:03.469 [2024-12-08 20:12:35.342831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:03.469 20:12:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86825 00:17:03.469 [2024-12-08 20:12:35.359648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.849 20:12:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.850 00:17:04.850 real 0m4.904s 00:17:04.850 user 0m7.066s 00:17:04.850 sys 0m0.795s 00:17:04.850 20:12:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.850 20:12:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.850 ************************************ 00:17:04.850 END TEST raid_state_function_test_sb_md_separate 00:17:04.850 ************************************ 00:17:04.850 20:12:36 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:04.850 20:12:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:04.850 20:12:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.850 20:12:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.850 ************************************ 00:17:04.850 START TEST raid_superblock_test_md_separate 00:17:04.850 ************************************ 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87077 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87077 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87077 ']' 00:17:04.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.850 20:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.850 [2024-12-08 20:12:36.573467] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:04.850 [2024-12-08 20:12:36.573667] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87077 ] 00:17:04.850 [2024-12-08 20:12:36.747304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.110 [2024-12-08 20:12:36.853281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.110 [2024-12-08 20:12:37.041181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.110 [2024-12-08 20:12:37.041310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.679 malloc1 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.679 [2024-12-08 20:12:37.432664] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.679 [2024-12-08 20:12:37.432756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.679 [2024-12-08 20:12:37.432799] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.679 [2024-12-08 20:12:37.432808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.679 [2024-12-08 20:12:37.434675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.679 [2024-12-08 20:12:37.434709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.679 pt1 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.679 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 malloc2 00:17:05.680 20:12:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 [2024-12-08 20:12:37.486062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.680 [2024-12-08 20:12:37.486152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.680 [2024-12-08 20:12:37.486206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.680 [2024-12-08 20:12:37.486234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.680 [2024-12-08 20:12:37.488115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.680 [2024-12-08 20:12:37.488177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.680 pt2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 
[2024-12-08 20:12:37.498088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.680 [2024-12-08 20:12:37.499874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.680 [2024-12-08 20:12:37.500117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.680 [2024-12-08 20:12:37.500178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.680 [2024-12-08 20:12:37.500304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:05.680 [2024-12-08 20:12:37.500466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.680 [2024-12-08 20:12:37.500508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.680 [2024-12-08 20:12:37.500672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.680 "name": "raid_bdev1", 00:17:05.680 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:05.680 "strip_size_kb": 0, 00:17:05.680 "state": "online", 00:17:05.680 "raid_level": "raid1", 00:17:05.680 "superblock": true, 00:17:05.680 "num_base_bdevs": 2, 00:17:05.680 "num_base_bdevs_discovered": 2, 00:17:05.680 "num_base_bdevs_operational": 2, 00:17:05.680 "base_bdevs_list": [ 00:17:05.680 { 00:17:05.680 "name": "pt1", 00:17:05.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.680 "is_configured": true, 00:17:05.680 "data_offset": 256, 00:17:05.680 "data_size": 7936 00:17:05.680 }, 00:17:05.680 { 00:17:05.680 "name": "pt2", 00:17:05.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.680 "is_configured": true, 00:17:05.680 "data_offset": 256, 00:17:05.680 "data_size": 7936 00:17:05.680 } 00:17:05.680 ] 00:17:05.680 }' 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.680 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.248 20:12:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.248 [2024-12-08 20:12:37.985508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.248 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.248 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.248 "name": "raid_bdev1", 00:17:06.248 "aliases": [ 00:17:06.248 "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac" 00:17:06.248 ], 00:17:06.248 "product_name": "Raid Volume", 00:17:06.248 "block_size": 4096, 00:17:06.248 "num_blocks": 7936, 00:17:06.248 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 
00:17:06.248 "md_size": 32, 00:17:06.248 "md_interleave": false, 00:17:06.248 "dif_type": 0, 00:17:06.248 "assigned_rate_limits": { 00:17:06.248 "rw_ios_per_sec": 0, 00:17:06.248 "rw_mbytes_per_sec": 0, 00:17:06.248 "r_mbytes_per_sec": 0, 00:17:06.248 "w_mbytes_per_sec": 0 00:17:06.248 }, 00:17:06.248 "claimed": false, 00:17:06.248 "zoned": false, 00:17:06.248 "supported_io_types": { 00:17:06.248 "read": true, 00:17:06.248 "write": true, 00:17:06.248 "unmap": false, 00:17:06.248 "flush": false, 00:17:06.248 "reset": true, 00:17:06.248 "nvme_admin": false, 00:17:06.248 "nvme_io": false, 00:17:06.248 "nvme_io_md": false, 00:17:06.248 "write_zeroes": true, 00:17:06.248 "zcopy": false, 00:17:06.248 "get_zone_info": false, 00:17:06.248 "zone_management": false, 00:17:06.248 "zone_append": false, 00:17:06.248 "compare": false, 00:17:06.248 "compare_and_write": false, 00:17:06.248 "abort": false, 00:17:06.248 "seek_hole": false, 00:17:06.248 "seek_data": false, 00:17:06.248 "copy": false, 00:17:06.248 "nvme_iov_md": false 00:17:06.248 }, 00:17:06.248 "memory_domains": [ 00:17:06.248 { 00:17:06.248 "dma_device_id": "system", 00:17:06.248 "dma_device_type": 1 00:17:06.248 }, 00:17:06.248 { 00:17:06.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.248 "dma_device_type": 2 00:17:06.249 }, 00:17:06.249 { 00:17:06.249 "dma_device_id": "system", 00:17:06.249 "dma_device_type": 1 00:17:06.249 }, 00:17:06.249 { 00:17:06.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.249 "dma_device_type": 2 00:17:06.249 } 00:17:06.249 ], 00:17:06.249 "driver_specific": { 00:17:06.249 "raid": { 00:17:06.249 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:06.249 "strip_size_kb": 0, 00:17:06.249 "state": "online", 00:17:06.249 "raid_level": "raid1", 00:17:06.249 "superblock": true, 00:17:06.249 "num_base_bdevs": 2, 00:17:06.249 "num_base_bdevs_discovered": 2, 00:17:06.249 "num_base_bdevs_operational": 2, 00:17:06.249 "base_bdevs_list": [ 00:17:06.249 { 00:17:06.249 "name": "pt1", 
00:17:06.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.249 "is_configured": true, 00:17:06.249 "data_offset": 256, 00:17:06.249 "data_size": 7936 00:17:06.249 }, 00:17:06.249 { 00:17:06.249 "name": "pt2", 00:17:06.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.249 "is_configured": true, 00:17:06.249 "data_offset": 256, 00:17:06.249 "data_size": 7936 00:17:06.249 } 00:17:06.249 ] 00:17:06.249 } 00:17:06.249 } 00:17:06.249 }' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:06.249 pt2' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:06.249 20:12:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.249 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.249 [2024-12-08 20:12:38.209100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=74dc19f1-287c-4cf1-a014-af3fdf4cb5ac 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 74dc19f1-287c-4cf1-a014-af3fdf4cb5ac ']' 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.509 [2024-12-08 20:12:38.252766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.509 [2024-12-08 20:12:38.252828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.509 [2024-12-08 20:12:38.252908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.509 [2024-12-08 20:12:38.252979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.509 [2024-12-08 20:12:38.252991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.509 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.510 20:12:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 [2024-12-08 20:12:38.392536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.510 [2024-12-08 20:12:38.394353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.510 [2024-12-08 20:12:38.394476] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:06.510 [2024-12-08 20:12:38.394594] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:06.510 [2024-12-08 20:12:38.394649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.510 [2024-12-08 20:12:38.394687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:06.510 request: 00:17:06.510 { 00:17:06.510 "name": "raid_bdev1", 00:17:06.510 "raid_level": "raid1", 00:17:06.510 "base_bdevs": [ 00:17:06.510 "malloc1", 00:17:06.510 "malloc2" 00:17:06.510 ], 00:17:06.510 "superblock": false, 00:17:06.510 "method": "bdev_raid_create", 00:17:06.510 "req_id": 1 00:17:06.510 } 00:17:06.510 Got JSON-RPC error response 00:17:06.510 response: 00:17:06.510 { 00:17:06.510 "code": -17, 00:17:06.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.510 } 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 20:12:38 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 [2024-12-08 20:12:38.456410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.510 [2024-12-08 20:12:38.456455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.510 [2024-12-08 20:12:38.456470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.510 [2024-12-08 20:12:38.456479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.510 [2024-12-08 20:12:38.458409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.510 [2024-12-08 20:12:38.458446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.510 [2024-12-08 20:12:38.458488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.510 [2024-12-08 20:12:38.458538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.510 pt1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:06.510 20:12:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.510 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.769 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.769 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.769 "name": "raid_bdev1", 00:17:06.769 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:06.769 "strip_size_kb": 0, 00:17:06.769 "state": "configuring", 00:17:06.769 "raid_level": "raid1", 00:17:06.769 
"superblock": true, 00:17:06.769 "num_base_bdevs": 2, 00:17:06.769 "num_base_bdevs_discovered": 1, 00:17:06.769 "num_base_bdevs_operational": 2, 00:17:06.769 "base_bdevs_list": [ 00:17:06.769 { 00:17:06.769 "name": "pt1", 00:17:06.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.769 "is_configured": true, 00:17:06.769 "data_offset": 256, 00:17:06.769 "data_size": 7936 00:17:06.769 }, 00:17:06.769 { 00:17:06.769 "name": null, 00:17:06.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.769 "is_configured": false, 00:17:06.769 "data_offset": 256, 00:17:06.769 "data_size": 7936 00:17:06.769 } 00:17:06.769 ] 00:17:06.769 }' 00:17:06.770 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.770 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.030 [2024-12-08 20:12:38.887701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.030 [2024-12-08 20:12:38.887813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.030 [2024-12-08 20:12:38.887853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:07.030 
[2024-12-08 20:12:38.887884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.030 [2024-12-08 20:12:38.888190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.030 [2024-12-08 20:12:38.888252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.030 [2024-12-08 20:12:38.888350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.030 [2024-12-08 20:12:38.888403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.030 [2024-12-08 20:12:38.888566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:07.030 [2024-12-08 20:12:38.888607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.030 [2024-12-08 20:12:38.888727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:07.030 [2024-12-08 20:12:38.888895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:07.030 [2024-12-08 20:12:38.888933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:07.030 [2024-12-08 20:12:38.889107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.030 pt2 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.030 "name": "raid_bdev1", 00:17:07.030 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:07.030 "strip_size_kb": 0, 00:17:07.030 "state": "online", 00:17:07.030 "raid_level": "raid1", 00:17:07.030 "superblock": true, 00:17:07.030 "num_base_bdevs": 2, 00:17:07.030 "num_base_bdevs_discovered": 2, 00:17:07.030 
"num_base_bdevs_operational": 2, 00:17:07.030 "base_bdevs_list": [ 00:17:07.030 { 00:17:07.030 "name": "pt1", 00:17:07.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.030 "is_configured": true, 00:17:07.030 "data_offset": 256, 00:17:07.030 "data_size": 7936 00:17:07.030 }, 00:17:07.030 { 00:17:07.030 "name": "pt2", 00:17:07.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.030 "is_configured": true, 00:17:07.030 "data_offset": 256, 00:17:07.030 "data_size": 7936 00:17:07.030 } 00:17:07.030 ] 00:17:07.030 }' 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.030 20:12:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.599 [2024-12-08 20:12:39.323267] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.599 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.599 "name": "raid_bdev1", 00:17:07.599 "aliases": [ 00:17:07.599 "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac" 00:17:07.599 ], 00:17:07.599 "product_name": "Raid Volume", 00:17:07.599 "block_size": 4096, 00:17:07.599 "num_blocks": 7936, 00:17:07.599 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:07.599 "md_size": 32, 00:17:07.599 "md_interleave": false, 00:17:07.599 "dif_type": 0, 00:17:07.599 "assigned_rate_limits": { 00:17:07.599 "rw_ios_per_sec": 0, 00:17:07.599 "rw_mbytes_per_sec": 0, 00:17:07.599 "r_mbytes_per_sec": 0, 00:17:07.599 "w_mbytes_per_sec": 0 00:17:07.599 }, 00:17:07.599 "claimed": false, 00:17:07.599 "zoned": false, 00:17:07.599 "supported_io_types": { 00:17:07.599 "read": true, 00:17:07.599 "write": true, 00:17:07.599 "unmap": false, 00:17:07.599 "flush": false, 00:17:07.599 "reset": true, 00:17:07.599 "nvme_admin": false, 00:17:07.599 "nvme_io": false, 00:17:07.599 "nvme_io_md": false, 00:17:07.599 "write_zeroes": true, 00:17:07.599 "zcopy": false, 00:17:07.599 "get_zone_info": false, 00:17:07.599 "zone_management": false, 00:17:07.599 "zone_append": false, 00:17:07.599 "compare": false, 00:17:07.599 "compare_and_write": false, 00:17:07.599 "abort": false, 00:17:07.599 "seek_hole": false, 00:17:07.599 "seek_data": false, 00:17:07.599 "copy": false, 00:17:07.599 "nvme_iov_md": false 00:17:07.599 }, 00:17:07.599 "memory_domains": [ 00:17:07.599 { 00:17:07.599 "dma_device_id": "system", 00:17:07.599 "dma_device_type": 1 00:17:07.599 }, 00:17:07.599 { 00:17:07.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.599 "dma_device_type": 2 00:17:07.599 }, 00:17:07.599 { 00:17:07.599 "dma_device_id": "system", 00:17:07.599 "dma_device_type": 
1 00:17:07.599 }, 00:17:07.599 { 00:17:07.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.599 "dma_device_type": 2 00:17:07.599 } 00:17:07.599 ], 00:17:07.599 "driver_specific": { 00:17:07.599 "raid": { 00:17:07.599 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:07.599 "strip_size_kb": 0, 00:17:07.599 "state": "online", 00:17:07.599 "raid_level": "raid1", 00:17:07.599 "superblock": true, 00:17:07.599 "num_base_bdevs": 2, 00:17:07.599 "num_base_bdevs_discovered": 2, 00:17:07.599 "num_base_bdevs_operational": 2, 00:17:07.599 "base_bdevs_list": [ 00:17:07.599 { 00:17:07.599 "name": "pt1", 00:17:07.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.599 "is_configured": true, 00:17:07.599 "data_offset": 256, 00:17:07.599 "data_size": 7936 00:17:07.599 }, 00:17:07.599 { 00:17:07.599 "name": "pt2", 00:17:07.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.599 "is_configured": true, 00:17:07.599 "data_offset": 256, 00:17:07.599 "data_size": 7936 00:17:07.599 } 00:17:07.599 ] 00:17:07.599 } 00:17:07.599 } 00:17:07.599 }' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.600 pt2' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.600 [2024-12-08 20:12:39.538859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 74dc19f1-287c-4cf1-a014-af3fdf4cb5ac '!=' 74dc19f1-287c-4cf1-a014-af3fdf4cb5ac ']' 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.600 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.600 [2024-12-08 20:12:39.570592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.859 20:12:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.859 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.860 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.860 "name": "raid_bdev1", 00:17:07.860 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:07.860 "strip_size_kb": 0, 00:17:07.860 "state": "online", 00:17:07.860 "raid_level": "raid1", 00:17:07.860 "superblock": true, 00:17:07.860 "num_base_bdevs": 2, 00:17:07.860 "num_base_bdevs_discovered": 1, 00:17:07.860 "num_base_bdevs_operational": 1, 00:17:07.860 "base_bdevs_list": [ 00:17:07.860 { 00:17:07.860 "name": null, 00:17:07.860 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:07.860 "is_configured": false, 00:17:07.860 "data_offset": 0, 00:17:07.860 "data_size": 7936 00:17:07.860 }, 00:17:07.860 { 00:17:07.860 "name": "pt2", 00:17:07.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.860 "is_configured": true, 00:17:07.860 "data_offset": 256, 00:17:07.860 "data_size": 7936 00:17:07.860 } 00:17:07.860 ] 00:17:07.860 }' 00:17:07.860 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.860 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.120 [2024-12-08 20:12:39.969888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.120 [2024-12-08 20:12:39.969969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.120 [2024-12-08 20:12:39.970067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.120 [2024-12-08 20:12:39.970157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.120 [2024-12-08 20:12:39.970224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r 
'.[]' 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.120 20:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.120 [2024-12-08 20:12:40.041755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.120 [2024-12-08 20:12:40.041807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.120 [2024-12-08 20:12:40.041823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:08.120 [2024-12-08 20:12:40.041834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.120 [2024-12-08 20:12:40.043778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.120 [2024-12-08 20:12:40.043853] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.120 [2024-12-08 20:12:40.043906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.120 [2024-12-08 20:12:40.044000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.120 [2024-12-08 20:12:40.044107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:08.120 [2024-12-08 20:12:40.044119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.120 [2024-12-08 20:12:40.044197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.120 [2024-12-08 20:12:40.044320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:08.120 [2024-12-08 20:12:40.044328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:08.120 [2024-12-08 20:12:40.044418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.120 pt2 
00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.120 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.121 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.121 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.379 20:12:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.379 "name": "raid_bdev1", 00:17:08.380 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:08.380 "strip_size_kb": 0, 00:17:08.380 "state": "online", 00:17:08.380 "raid_level": "raid1", 00:17:08.380 "superblock": true, 00:17:08.380 "num_base_bdevs": 2, 00:17:08.380 "num_base_bdevs_discovered": 1, 00:17:08.380 "num_base_bdevs_operational": 1, 00:17:08.380 "base_bdevs_list": [ 00:17:08.380 { 00:17:08.380 "name": null, 00:17:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.380 "is_configured": false, 00:17:08.380 "data_offset": 256, 00:17:08.380 "data_size": 7936 00:17:08.380 }, 00:17:08.380 { 00:17:08.380 "name": "pt2", 00:17:08.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.380 "is_configured": true, 00:17:08.380 "data_offset": 256, 00:17:08.380 "data_size": 7936 00:17:08.380 } 00:17:08.380 ] 00:17:08.380 }' 00:17:08.380 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.380 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 [2024-12-08 20:12:40.476995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.639 [2024-12-08 20:12:40.477070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.639 [2024-12-08 20:12:40.477158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.639 [2024-12-08 20:12:40.477272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:17:08.639 [2024-12-08 20:12:40.477327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 [2024-12-08 20:12:40.540898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.639 [2024-12-08 20:12:40.541007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.639 [2024-12-08 20:12:40.541065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:17:08.639 [2024-12-08 20:12:40.541101] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.639 [2024-12-08 20:12:40.543174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.639 [2024-12-08 20:12:40.543238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.639 [2024-12-08 20:12:40.543311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.639 [2024-12-08 20:12:40.543387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.639 [2024-12-08 20:12:40.543597] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:08.639 [2024-12-08 20:12:40.543658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.639 [2024-12-08 20:12:40.543712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:08.639 [2024-12-08 20:12:40.543872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.639 [2024-12-08 20:12:40.544016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:08.639 [2024-12-08 20:12:40.544060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.639 [2024-12-08 20:12:40.544174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:08.639 [2024-12-08 20:12:40.544328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:08.639 [2024-12-08 20:12:40.544373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:08.639 [2024-12-08 20:12:40.544565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.639 pt1 00:17:08.639 
20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.639 "name": "raid_bdev1", 00:17:08.639 "uuid": "74dc19f1-287c-4cf1-a014-af3fdf4cb5ac", 00:17:08.639 "strip_size_kb": 0, 00:17:08.639 "state": "online", 00:17:08.639 "raid_level": "raid1", 00:17:08.639 "superblock": true, 00:17:08.639 "num_base_bdevs": 2, 00:17:08.639 "num_base_bdevs_discovered": 1, 00:17:08.639 "num_base_bdevs_operational": 1, 00:17:08.639 "base_bdevs_list": [ 00:17:08.639 { 00:17:08.639 "name": null, 00:17:08.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.639 "is_configured": false, 00:17:08.639 "data_offset": 256, 00:17:08.639 "data_size": 7936 00:17:08.639 }, 00:17:08.639 { 00:17:08.639 "name": "pt2", 00:17:08.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.639 "is_configured": true, 00:17:08.639 "data_offset": 256, 00:17:08.639 "data_size": 7936 00:17:08.639 } 00:17:08.639 ] 00:17:08.639 }' 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.639 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:09.206 20:12:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:09.206 [2024-12-08 20:12:40.984457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.206 20:12:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 74dc19f1-287c-4cf1-a014-af3fdf4cb5ac '!=' 74dc19f1-287c-4cf1-a014-af3fdf4cb5ac ']' 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87077 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87077 ']' 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87077 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87077 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.206 killing process with pid 87077 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87077' 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87077 00:17:09.206 [2024-12-08 20:12:41.073938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.206 [2024-12-08 20:12:41.074033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.206 [2024-12-08 20:12:41.074082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.206 [2024-12-08 20:12:41.074099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:09.206 20:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87077 00:17:09.465 [2024-12-08 20:12:41.289807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.405 ************************************ 00:17:10.405 END TEST raid_superblock_test_md_separate 00:17:10.405 ************************************ 00:17:10.405 20:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:10.405 00:17:10.405 real 0m5.896s 00:17:10.405 user 0m8.941s 00:17:10.405 sys 0m1.004s 00:17:10.405 20:12:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.405 20:12:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.665 20:12:42 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:10.665 20:12:42 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:10.665 20:12:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:10.665 20:12:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.665 20:12:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.665 ************************************ 
00:17:10.665 START TEST raid_rebuild_test_sb_md_separate 00:17:10.665 ************************************ 00:17:10.665 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.666 
20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87402 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87402 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87402 ']' 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.666 20:12:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.666 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.666 Zero copy mechanism will not be used. 00:17:10.666 [2024-12-08 20:12:42.537719] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:10.666 [2024-12-08 20:12:42.537910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87402 ] 00:17:10.926 [2024-12-08 20:12:42.711367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.926 [2024-12-08 20:12:42.819830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.187 [2024-12-08 20:12:43.010872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.187 [2024-12-08 20:12:43.011030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 BaseBdev1_malloc 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 [2024-12-08 20:12:43.406384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.446 [2024-12-08 20:12:43.406440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.446 [2024-12-08 20:12:43.406479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.446 [2024-12-08 20:12:43.406490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.446 [2024-12-08 20:12:43.408344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.446 [2024-12-08 20:12:43.408382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.446 BaseBdev1 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.446 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.707 BaseBdev2_malloc 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 [2024-12-08 20:12:43.460995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.707 [2024-12-08 20:12:43.461049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.707 [2024-12-08 20:12:43.461084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.707 [2024-12-08 20:12:43.461096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.707 [2024-12-08 20:12:43.462852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.707 [2024-12-08 20:12:43.462889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.707 BaseBdev2 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 spare_malloc 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 spare_delay 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 [2024-12-08 20:12:43.539259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.707 [2024-12-08 20:12:43.539313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.707 [2024-12-08 20:12:43.539348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:11.707 [2024-12-08 20:12:43.539358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.707 [2024-12-08 20:12:43.541235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.707 [2024-12-08 20:12:43.541275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.707 spare 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:11.707 20:12:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.707 [2024-12-08 20:12:43.551295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.707 [2024-12-08 20:12:43.553215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.707 [2024-12-08 20:12:43.553386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:11.707 [2024-12-08 20:12:43.553401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:11.707 [2024-12-08 20:12:43.553473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:11.707 [2024-12-08 20:12:43.553591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:11.707 [2024-12-08 20:12:43.553601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:11.707 [2024-12-08 20:12:43.553687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.707 20:12:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.707 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.708 "name": "raid_bdev1", 00:17:11.708 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:11.708 "strip_size_kb": 0, 00:17:11.708 "state": "online", 00:17:11.708 "raid_level": "raid1", 00:17:11.708 "superblock": true, 00:17:11.708 "num_base_bdevs": 2, 00:17:11.708 "num_base_bdevs_discovered": 2, 00:17:11.708 "num_base_bdevs_operational": 2, 00:17:11.708 "base_bdevs_list": [ 00:17:11.708 { 00:17:11.708 "name": "BaseBdev1", 00:17:11.708 "uuid": "c2b8a066-ea80-582f-bbe9-d2135ecb3ba3", 00:17:11.708 "is_configured": true, 00:17:11.708 "data_offset": 256, 00:17:11.708 "data_size": 7936 00:17:11.708 }, 00:17:11.708 { 00:17:11.708 "name": "BaseBdev2", 00:17:11.708 "uuid": 
"41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:11.708 "is_configured": true, 00:17:11.708 "data_offset": 256, 00:17:11.708 "data_size": 7936 00:17:11.708 } 00:17:11.708 ] 00:17:11.708 }' 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.708 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.967 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:11.967 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.967 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.967 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.967 [2024-12-08 20:12:43.942902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:12.228 20:12:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:12.228 20:12:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.228 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.228 [2024-12-08 20:12:44.198251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:12.488 /dev/nbd0 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.488 20:12:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.488 1+0 records in 00:17:12.488 1+0 records out 00:17:12.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294025 s, 13.9 MB/s 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:12.488 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:13.068 7936+0 records in 00:17:13.068 7936+0 records out 00:17:13.068 32505856 bytes (33 MB, 31 MiB) copied, 0.604517 s, 53.8 MB/s 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.068 20:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.332 [2024-12-08 20:12:45.080174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.332 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.333 [2024-12-08 20:12:45.096260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.333 20:12:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.333 "name": "raid_bdev1", 00:17:13.333 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:13.333 "strip_size_kb": 0, 00:17:13.333 "state": "online", 00:17:13.333 "raid_level": "raid1", 00:17:13.333 "superblock": true, 00:17:13.333 "num_base_bdevs": 2, 00:17:13.333 "num_base_bdevs_discovered": 1, 00:17:13.333 "num_base_bdevs_operational": 1, 00:17:13.333 "base_bdevs_list": [ 00:17:13.333 { 00:17:13.333 "name": null, 00:17:13.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.333 "is_configured": false, 00:17:13.333 "data_offset": 0, 00:17:13.333 "data_size": 7936 00:17:13.333 }, 00:17:13.333 { 00:17:13.333 "name": "BaseBdev2", 00:17:13.333 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:13.333 "is_configured": true, 00:17:13.333 "data_offset": 256, 00:17:13.333 "data_size": 7936 00:17:13.333 } 
00:17:13.333 ] 00:17:13.333 }' 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.333 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.593 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.593 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 [2024-12-08 20:12:45.439648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.593 [2024-12-08 20:12:45.453550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:13.593 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.593 20:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.594 [2024-12-08 20:12:45.455427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.532 20:12:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.532 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.793 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.793 "name": "raid_bdev1", 00:17:14.793 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:14.793 "strip_size_kb": 0, 00:17:14.793 "state": "online", 00:17:14.793 "raid_level": "raid1", 00:17:14.793 "superblock": true, 00:17:14.793 "num_base_bdevs": 2, 00:17:14.793 "num_base_bdevs_discovered": 2, 00:17:14.793 "num_base_bdevs_operational": 2, 00:17:14.793 "process": { 00:17:14.793 "type": "rebuild", 00:17:14.793 "target": "spare", 00:17:14.793 "progress": { 00:17:14.793 "blocks": 2560, 00:17:14.793 "percent": 32 00:17:14.793 } 00:17:14.793 }, 00:17:14.793 "base_bdevs_list": [ 00:17:14.793 { 00:17:14.793 "name": "spare", 00:17:14.793 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:14.794 "is_configured": true, 00:17:14.794 "data_offset": 256, 00:17:14.794 "data_size": 7936 00:17:14.794 }, 00:17:14.794 { 00:17:14.794 "name": "BaseBdev2", 00:17:14.794 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:14.794 "is_configured": true, 00:17:14.794 "data_offset": 256, 00:17:14.794 "data_size": 7936 00:17:14.794 } 00:17:14.794 ] 00:17:14.794 }' 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 [2024-12-08 20:12:46.603455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.794 [2024-12-08 20:12:46.660484] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.794 [2024-12-08 20:12:46.660562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.794 [2024-12-08 20:12:46.660577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.794 [2024-12-08 20:12:46.660591] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.794 "name": "raid_bdev1", 00:17:14.794 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:14.794 "strip_size_kb": 0, 00:17:14.794 "state": "online", 00:17:14.794 "raid_level": "raid1", 00:17:14.794 "superblock": true, 00:17:14.794 "num_base_bdevs": 2, 00:17:14.794 "num_base_bdevs_discovered": 1, 00:17:14.794 "num_base_bdevs_operational": 1, 00:17:14.794 "base_bdevs_list": [ 00:17:14.794 { 00:17:14.794 "name": null, 00:17:14.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.794 "is_configured": false, 00:17:14.794 "data_offset": 0, 00:17:14.794 "data_size": 7936 00:17:14.794 }, 00:17:14.794 { 00:17:14.794 "name": "BaseBdev2", 00:17:14.794 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:14.794 "is_configured": true, 00:17:14.794 "data_offset": 
256, 00:17:14.794 "data_size": 7936 00:17:14.794 } 00:17:14.794 ] 00:17:14.794 }' 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.794 20:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.363 "name": "raid_bdev1", 00:17:15.363 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:15.363 "strip_size_kb": 0, 00:17:15.363 "state": "online", 00:17:15.363 "raid_level": "raid1", 00:17:15.363 "superblock": true, 00:17:15.363 "num_base_bdevs": 2, 00:17:15.363 "num_base_bdevs_discovered": 1, 00:17:15.363 "num_base_bdevs_operational": 1, 
00:17:15.363 "base_bdevs_list": [ 00:17:15.363 { 00:17:15.363 "name": null, 00:17:15.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.363 "is_configured": false, 00:17:15.363 "data_offset": 0, 00:17:15.363 "data_size": 7936 00:17:15.363 }, 00:17:15.363 { 00:17:15.363 "name": "BaseBdev2", 00:17:15.363 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:15.363 "is_configured": true, 00:17:15.363 "data_offset": 256, 00:17:15.363 "data_size": 7936 00:17:15.363 } 00:17:15.363 ] 00:17:15.363 }' 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 [2024-12-08 20:12:47.203639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.363 [2024-12-08 20:12:47.217496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.363 20:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.363 [2024-12-08 20:12:47.219345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.303 20:12:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.303 "name": "raid_bdev1", 00:17:16.303 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:16.303 "strip_size_kb": 0, 00:17:16.303 "state": "online", 00:17:16.303 "raid_level": "raid1", 00:17:16.303 "superblock": true, 00:17:16.303 "num_base_bdevs": 2, 00:17:16.303 "num_base_bdevs_discovered": 2, 00:17:16.303 "num_base_bdevs_operational": 2, 00:17:16.303 "process": { 00:17:16.303 "type": "rebuild", 00:17:16.303 "target": "spare", 00:17:16.303 "progress": { 00:17:16.303 "blocks": 2560, 00:17:16.303 "percent": 32 00:17:16.303 } 00:17:16.303 }, 00:17:16.303 "base_bdevs_list": [ 00:17:16.303 { 00:17:16.303 "name": "spare", 00:17:16.303 "uuid": 
"c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:16.303 "is_configured": true, 00:17:16.303 "data_offset": 256, 00:17:16.303 "data_size": 7936 00:17:16.303 }, 00:17:16.303 { 00:17:16.303 "name": "BaseBdev2", 00:17:16.303 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:16.303 "is_configured": true, 00:17:16.303 "data_offset": 256, 00:17:16.303 "data_size": 7936 00:17:16.303 } 00:17:16.303 ] 00:17:16.303 }' 00:17:16.303 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:16.563 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=690 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.563 
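The trace above captures a real scripting failure: `'[' = false ']'` followed by `bdev_raid.sh: line 666: [: =: unary operator expected`. A variable expanded to the empty string inside an unquoted `[ ]` test, so `[` saw only two operands and misparsed `=` as a unary operator. A small sketch reproducing that failure mode and the quoting that avoids it (variable names here are illustrative, not from the test script):

```shell
flag=""   # simulates the unset/empty variable at bdev_raid.sh line 666

# Quoted operand: safe even when $flag is empty.
if [ "$flag" = false ]; then
  state="false"
else
  state="empty-or-other"
fi
echo "$state"

# The unquoted form expands to `[ = false ]`, which is exactly the
# "[: =: unary operator expected" error recorded in the log above.
demo_err=$( [ $flag = false ] 2>&1 || true )
echo "$demo_err"

# `[[ $flag = false ]]` would also be immune, since [[ ]] does not
# word-split or glob-expand its operands.
```

Note the test run continues past the error because `[` merely returns a nonzero status; the surrounding script treats the failed test as false and falls through to the `else` branch.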
20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.563 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.563 "name": "raid_bdev1", 00:17:16.563 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:16.563 "strip_size_kb": 0, 00:17:16.563 "state": "online", 00:17:16.563 "raid_level": "raid1", 00:17:16.563 "superblock": true, 00:17:16.563 "num_base_bdevs": 2, 00:17:16.563 "num_base_bdevs_discovered": 2, 00:17:16.563 "num_base_bdevs_operational": 2, 00:17:16.563 "process": { 00:17:16.563 "type": "rebuild", 00:17:16.564 "target": "spare", 00:17:16.564 "progress": { 00:17:16.564 "blocks": 2816, 00:17:16.564 "percent": 35 00:17:16.564 } 00:17:16.564 }, 00:17:16.564 "base_bdevs_list": [ 00:17:16.564 { 00:17:16.564 "name": "spare", 00:17:16.564 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:16.564 "is_configured": true, 00:17:16.564 "data_offset": 256, 00:17:16.564 "data_size": 7936 00:17:16.564 
}, 00:17:16.564 { 00:17:16.564 "name": "BaseBdev2", 00:17:16.564 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:16.564 "is_configured": true, 00:17:16.564 "data_offset": 256, 00:17:16.564 "data_size": 7936 00:17:16.564 } 00:17:16.564 ] 00:17:16.564 }' 00:17:16.564 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.564 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.564 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.564 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.564 20:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.505 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.765 "name": "raid_bdev1", 00:17:17.765 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:17.765 "strip_size_kb": 0, 00:17:17.765 "state": "online", 00:17:17.765 "raid_level": "raid1", 00:17:17.765 "superblock": true, 00:17:17.765 "num_base_bdevs": 2, 00:17:17.765 "num_base_bdevs_discovered": 2, 00:17:17.765 "num_base_bdevs_operational": 2, 00:17:17.765 "process": { 00:17:17.765 "type": "rebuild", 00:17:17.765 "target": "spare", 00:17:17.765 "progress": { 00:17:17.765 "blocks": 5632, 00:17:17.765 "percent": 70 00:17:17.765 } 00:17:17.765 }, 00:17:17.765 "base_bdevs_list": [ 00:17:17.765 { 00:17:17.765 "name": "spare", 00:17:17.765 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:17.765 "is_configured": true, 00:17:17.765 "data_offset": 256, 00:17:17.765 "data_size": 7936 00:17:17.765 }, 00:17:17.765 { 00:17:17.765 "name": "BaseBdev2", 00:17:17.765 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:17.765 "is_configured": true, 00:17:17.765 "data_offset": 256, 00:17:17.765 "data_size": 7936 00:17:17.765 } 00:17:17.765 ] 00:17:17.765 }' 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.765 20:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.705 [2024-12-08 20:12:50.331593] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:18.705 [2024-12-08 20:12:50.331729] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:18.705 [2024-12-08 20:12:50.331873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.705 "name": "raid_bdev1", 00:17:18.705 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:18.705 
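The rebuild progress above is polled in a bounded loop: the script sets `timeout=690`, loops while `(( SECONDS < timeout ))` (bash's `SECONDS` counts elapsed script runtime), re-reads the progress JSON, sleeps one second per iteration, and `break`s at `bdev_raid.sh@709` once the `.process` object disappears. A minimal sketch of that structure, with a simple counter standing in for the RPC progress query and a shortened timeout:

```shell
timeout=5           # the real script uses 690 seconds
status="timed-out"
percent=0

while (( SECONDS < timeout )); do
  # Stand-in for re-querying `.process.progress.percent` over RPC;
  # the log shows it stepping 32 -> 35 -> 70 percent.
  percent=$(( percent + 35 ))
  if (( percent >= 100 )); then
    status="finished"
    break           # mirrors bdev_raid.sh@709 once rebuild completes
  fi
  sleep 0.1         # the real loop sleeps 1 s (bdev_raid.sh@711)
done
echo "$status"
```

Using `SECONDS` rather than counting iterations makes the bound wall-clock based, so slow RPC round-trips still cannot extend the test past the deadline.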
"strip_size_kb": 0, 00:17:18.705 "state": "online", 00:17:18.705 "raid_level": "raid1", 00:17:18.705 "superblock": true, 00:17:18.705 "num_base_bdevs": 2, 00:17:18.705 "num_base_bdevs_discovered": 2, 00:17:18.705 "num_base_bdevs_operational": 2, 00:17:18.705 "base_bdevs_list": [ 00:17:18.705 { 00:17:18.705 "name": "spare", 00:17:18.705 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:18.705 "is_configured": true, 00:17:18.705 "data_offset": 256, 00:17:18.705 "data_size": 7936 00:17:18.705 }, 00:17:18.705 { 00:17:18.705 "name": "BaseBdev2", 00:17:18.705 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:18.705 "is_configured": true, 00:17:18.705 "data_offset": 256, 00:17:18.705 "data_size": 7936 00:17:18.705 } 00:17:18.705 ] 00:17:18.705 }' 00:17:18.705 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.965 20:12:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.965 "name": "raid_bdev1", 00:17:18.965 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:18.965 "strip_size_kb": 0, 00:17:18.965 "state": "online", 00:17:18.965 "raid_level": "raid1", 00:17:18.965 "superblock": true, 00:17:18.965 "num_base_bdevs": 2, 00:17:18.965 "num_base_bdevs_discovered": 2, 00:17:18.965 "num_base_bdevs_operational": 2, 00:17:18.965 "base_bdevs_list": [ 00:17:18.965 { 00:17:18.965 "name": "spare", 00:17:18.965 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:18.965 "is_configured": true, 00:17:18.965 "data_offset": 256, 00:17:18.965 "data_size": 7936 00:17:18.965 }, 00:17:18.965 { 00:17:18.965 "name": "BaseBdev2", 00:17:18.965 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:18.965 "is_configured": true, 00:17:18.965 "data_offset": 256, 00:17:18.965 "data_size": 7936 00:17:18.965 } 00:17:18.965 ] 00:17:18.965 }' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.965 20:12:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.965 "name": "raid_bdev1", 00:17:18.965 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:18.965 "strip_size_kb": 0, 00:17:18.965 "state": "online", 00:17:18.965 "raid_level": "raid1", 00:17:18.965 "superblock": true, 00:17:18.965 "num_base_bdevs": 2, 00:17:18.965 "num_base_bdevs_discovered": 2, 00:17:18.965 "num_base_bdevs_operational": 2, 00:17:18.965 "base_bdevs_list": [ 00:17:18.965 { 00:17:18.965 "name": "spare", 00:17:18.965 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:18.965 "is_configured": true, 00:17:18.965 "data_offset": 256, 00:17:18.965 "data_size": 7936 00:17:18.965 }, 00:17:18.965 { 00:17:18.965 "name": "BaseBdev2", 00:17:18.965 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:18.965 "is_configured": true, 00:17:18.965 "data_offset": 256, 00:17:18.965 "data_size": 7936 00:17:18.965 } 00:17:18.965 ] 00:17:18.965 }' 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.965 20:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.535 [2024-12-08 20:12:51.305770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.535 [2024-12-08 20:12:51.305838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.535 [2024-12-08 20:12:51.305941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.535 [2024-12-08 20:12:51.306021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:19.535 [2024-12-08 20:12:51.306031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.535 20:12:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.535 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.796 /dev/nbd0 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.796 1+0 records in 00:17:19.796 1+0 records out 00:17:19.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192368 
s, 21.3 MB/s 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.796 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:20.056 /dev/nbd1 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:20.056 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.057 1+0 records in 00:17:20.057 1+0 records out 00:17:20.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450353 s, 9.1 MB/s 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.057 20:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.057 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.317 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.577 
20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.577 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 [2024-12-08 20:12:52.475149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.578 [2024-12-08 20:12:52.475236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.578 [2024-12-08 20:12:52.475274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:17:20.578 [2024-12-08 20:12:52.475302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.578 [2024-12-08 20:12:52.477310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.578 [2024-12-08 20:12:52.477376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.578 [2024-12-08 20:12:52.477462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.578 [2024-12-08 20:12:52.477530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.578 [2024-12-08 20:12:52.477728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.578 spare 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.578 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.837 [2024-12-08 20:12:52.577666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:20.837 [2024-12-08 20:12:52.577694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.837 [2024-12-08 20:12:52.577801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:20.838 [2024-12-08 20:12:52.577976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:20.838 [2024-12-08 20:12:52.577988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:20.838 [2024-12-08 20:12:52.578137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.838 20:12:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.838 "name": "raid_bdev1", 00:17:20.838 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:20.838 "strip_size_kb": 0, 00:17:20.838 "state": "online", 00:17:20.838 "raid_level": "raid1", 00:17:20.838 "superblock": true, 00:17:20.838 "num_base_bdevs": 2, 00:17:20.838 "num_base_bdevs_discovered": 2, 00:17:20.838 "num_base_bdevs_operational": 2, 00:17:20.838 "base_bdevs_list": [ 00:17:20.838 { 00:17:20.838 "name": "spare", 00:17:20.838 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:20.838 "is_configured": true, 00:17:20.838 "data_offset": 256, 00:17:20.838 "data_size": 7936 00:17:20.838 }, 00:17:20.838 { 00:17:20.838 "name": "BaseBdev2", 00:17:20.838 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:20.838 "is_configured": true, 00:17:20.838 "data_offset": 256, 00:17:20.838 "data_size": 7936 00:17:20.838 } 00:17:20.838 ] 00:17:20.838 }' 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.838 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.098 20:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.098 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.098 "name": "raid_bdev1", 00:17:21.098 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:21.098 "strip_size_kb": 0, 00:17:21.098 "state": "online", 00:17:21.098 "raid_level": "raid1", 00:17:21.098 "superblock": true, 00:17:21.098 "num_base_bdevs": 2, 00:17:21.098 "num_base_bdevs_discovered": 2, 00:17:21.098 "num_base_bdevs_operational": 2, 00:17:21.098 "base_bdevs_list": [ 00:17:21.098 { 00:17:21.098 "name": "spare", 00:17:21.098 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:21.098 "is_configured": true, 00:17:21.098 "data_offset": 256, 00:17:21.098 "data_size": 7936 00:17:21.098 }, 00:17:21.098 { 00:17:21.098 "name": "BaseBdev2", 00:17:21.098 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:21.098 "is_configured": true, 00:17:21.098 "data_offset": 256, 00:17:21.098 "data_size": 7936 00:17:21.098 } 00:17:21.098 ] 00:17:21.098 }' 00:17:21.098 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.098 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.098 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.358 [2024-12-08 20:12:53.158060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.358 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.359 20:12:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.359 "name": "raid_bdev1", 00:17:21.359 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:21.359 "strip_size_kb": 0, 00:17:21.359 "state": "online", 00:17:21.359 "raid_level": "raid1", 00:17:21.359 "superblock": true, 00:17:21.359 "num_base_bdevs": 2, 00:17:21.359 "num_base_bdevs_discovered": 1, 00:17:21.359 "num_base_bdevs_operational": 1, 00:17:21.359 "base_bdevs_list": [ 00:17:21.359 { 00:17:21.359 "name": null, 00:17:21.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.359 "is_configured": false, 00:17:21.359 "data_offset": 0, 00:17:21.359 "data_size": 7936 00:17:21.359 }, 00:17:21.359 { 00:17:21.359 "name": "BaseBdev2", 00:17:21.359 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:21.359 "is_configured": true, 00:17:21.359 "data_offset": 256, 00:17:21.359 "data_size": 7936 00:17:21.359 } 
00:17:21.359 ] 00:17:21.359 }' 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.359 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.930 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:21.930 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.930 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.930 [2024-12-08 20:12:53.605341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.930 [2024-12-08 20:12:53.605554] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.930 [2024-12-08 20:12:53.605571] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:21.930 [2024-12-08 20:12:53.605610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.930 [2024-12-08 20:12:53.620045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:21.930 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.930 20:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:21.930 [2024-12-08 20:12:53.621879] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.872 "name": "raid_bdev1", 00:17:22.872 
"uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:22.872 "strip_size_kb": 0, 00:17:22.872 "state": "online", 00:17:22.872 "raid_level": "raid1", 00:17:22.872 "superblock": true, 00:17:22.872 "num_base_bdevs": 2, 00:17:22.872 "num_base_bdevs_discovered": 2, 00:17:22.872 "num_base_bdevs_operational": 2, 00:17:22.872 "process": { 00:17:22.872 "type": "rebuild", 00:17:22.872 "target": "spare", 00:17:22.872 "progress": { 00:17:22.872 "blocks": 2560, 00:17:22.872 "percent": 32 00:17:22.872 } 00:17:22.872 }, 00:17:22.872 "base_bdevs_list": [ 00:17:22.872 { 00:17:22.872 "name": "spare", 00:17:22.872 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:22.872 "is_configured": true, 00:17:22.872 "data_offset": 256, 00:17:22.872 "data_size": 7936 00:17:22.872 }, 00:17:22.872 { 00:17:22.872 "name": "BaseBdev2", 00:17:22.872 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:22.872 "is_configured": true, 00:17:22.872 "data_offset": 256, 00:17:22.872 "data_size": 7936 00:17:22.872 } 00:17:22.872 ] 00:17:22.872 }' 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.872 [2024-12-08 20:12:54.753886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.872 
[2024-12-08 20:12:54.826757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.872 [2024-12-08 20:12:54.826875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.872 [2024-12-08 20:12:54.826924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.872 [2024-12-08 20:12:54.826989] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.872 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.132 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.133 20:12:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.133 "name": "raid_bdev1", 00:17:23.133 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:23.133 "strip_size_kb": 0, 00:17:23.133 "state": "online", 00:17:23.133 "raid_level": "raid1", 00:17:23.133 "superblock": true, 00:17:23.133 "num_base_bdevs": 2, 00:17:23.133 "num_base_bdevs_discovered": 1, 00:17:23.133 "num_base_bdevs_operational": 1, 00:17:23.133 "base_bdevs_list": [ 00:17:23.133 { 00:17:23.133 "name": null, 00:17:23.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.133 "is_configured": false, 00:17:23.133 "data_offset": 0, 00:17:23.133 "data_size": 7936 00:17:23.133 }, 00:17:23.133 { 00:17:23.133 "name": "BaseBdev2", 00:17:23.133 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:23.133 "is_configured": true, 00:17:23.133 "data_offset": 256, 00:17:23.133 "data_size": 7936 00:17:23.133 } 00:17:23.133 ] 00:17:23.133 }' 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.133 20:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.392 20:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.392 20:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.392 20:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.392 [2024-12-08 20:12:55.293937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.392 [2024-12-08 20:12:55.294008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.392 [2024-12-08 20:12:55.294035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:23.392 [2024-12-08 20:12:55.294046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.392 [2024-12-08 20:12:55.294304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.392 [2024-12-08 20:12:55.294328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.392 [2024-12-08 20:12:55.294400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.392 [2024-12-08 20:12:55.294414] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.392 [2024-12-08 20:12:55.294424] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:23.392 [2024-12-08 20:12:55.294445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.392 [2024-12-08 20:12:55.307792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:23.392 spare 00:17:23.392 20:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.392 20:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:23.392 [2024-12-08 20:12:55.309581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.771 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.772 "name": 
"raid_bdev1", 00:17:24.772 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:24.772 "strip_size_kb": 0, 00:17:24.772 "state": "online", 00:17:24.772 "raid_level": "raid1", 00:17:24.772 "superblock": true, 00:17:24.772 "num_base_bdevs": 2, 00:17:24.772 "num_base_bdevs_discovered": 2, 00:17:24.772 "num_base_bdevs_operational": 2, 00:17:24.772 "process": { 00:17:24.772 "type": "rebuild", 00:17:24.772 "target": "spare", 00:17:24.772 "progress": { 00:17:24.772 "blocks": 2560, 00:17:24.772 "percent": 32 00:17:24.772 } 00:17:24.772 }, 00:17:24.772 "base_bdevs_list": [ 00:17:24.772 { 00:17:24.772 "name": "spare", 00:17:24.772 "uuid": "c0790667-0fcc-5224-b5bd-1bc8163c3347", 00:17:24.772 "is_configured": true, 00:17:24.772 "data_offset": 256, 00:17:24.772 "data_size": 7936 00:17:24.772 }, 00:17:24.772 { 00:17:24.772 "name": "BaseBdev2", 00:17:24.772 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:24.772 "is_configured": true, 00:17:24.772 "data_offset": 256, 00:17:24.772 "data_size": 7936 00:17:24.772 } 00:17:24.772 ] 00:17:24.772 }' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.772 [2024-12-08 20:12:56.442081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:24.772 [2024-12-08 20:12:56.514416] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.772 [2024-12-08 20:12:56.514485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.772 [2024-12-08 20:12:56.514502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.772 [2024-12-08 20:12:56.514508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.772 "name": "raid_bdev1", 00:17:24.772 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:24.772 "strip_size_kb": 0, 00:17:24.772 "state": "online", 00:17:24.772 "raid_level": "raid1", 00:17:24.772 "superblock": true, 00:17:24.772 "num_base_bdevs": 2, 00:17:24.772 "num_base_bdevs_discovered": 1, 00:17:24.772 "num_base_bdevs_operational": 1, 00:17:24.772 "base_bdevs_list": [ 00:17:24.772 { 00:17:24.772 "name": null, 00:17:24.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.772 "is_configured": false, 00:17:24.772 "data_offset": 0, 00:17:24.772 "data_size": 7936 00:17:24.772 }, 00:17:24.772 { 00:17:24.772 "name": "BaseBdev2", 00:17:24.772 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:24.772 "is_configured": true, 00:17:24.772 "data_offset": 256, 00:17:24.772 "data_size": 7936 00:17:24.772 } 00:17:24.772 ] 00:17:24.772 }' 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.772 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.032 20:12:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.032 "name": "raid_bdev1", 00:17:25.032 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:25.032 "strip_size_kb": 0, 00:17:25.032 "state": "online", 00:17:25.032 "raid_level": "raid1", 00:17:25.032 "superblock": true, 00:17:25.032 "num_base_bdevs": 2, 00:17:25.032 "num_base_bdevs_discovered": 1, 00:17:25.032 "num_base_bdevs_operational": 1, 00:17:25.032 "base_bdevs_list": [ 00:17:25.032 { 00:17:25.032 "name": null, 00:17:25.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.032 "is_configured": false, 00:17:25.032 "data_offset": 0, 00:17:25.032 "data_size": 7936 00:17:25.032 }, 00:17:25.032 { 00:17:25.032 "name": "BaseBdev2", 00:17:25.032 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:25.032 "is_configured": true, 00:17:25.032 "data_offset": 256, 00:17:25.032 "data_size": 7936 00:17:25.032 } 00:17:25.032 ] 00:17:25.032 }' 00:17:25.032 20:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.333 [2024-12-08 20:12:57.073152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.333 [2024-12-08 20:12:57.073240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.333 [2024-12-08 20:12:57.073281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:25.333 [2024-12-08 20:12:57.073308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.333 [2024-12-08 20:12:57.073575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.333 [2024-12-08 20:12:57.073620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:25.333 [2024-12-08 20:12:57.073705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:25.333 [2024-12-08 20:12:57.073747] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.333 [2024-12-08 20:12:57.073801] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.333 [2024-12-08 20:12:57.073838] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:25.333 BaseBdev1 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.333 20:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.305 "name": "raid_bdev1", 00:17:26.305 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:26.305 "strip_size_kb": 0, 00:17:26.305 "state": "online", 00:17:26.305 "raid_level": "raid1", 00:17:26.305 "superblock": true, 00:17:26.305 "num_base_bdevs": 2, 00:17:26.305 "num_base_bdevs_discovered": 1, 00:17:26.305 "num_base_bdevs_operational": 1, 00:17:26.305 "base_bdevs_list": [ 00:17:26.305 { 00:17:26.305 "name": null, 00:17:26.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.305 "is_configured": false, 00:17:26.305 "data_offset": 0, 00:17:26.305 "data_size": 7936 00:17:26.305 }, 00:17:26.305 { 00:17:26.305 "name": "BaseBdev2", 00:17:26.305 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:26.305 "is_configured": true, 00:17:26.305 "data_offset": 256, 00:17:26.305 "data_size": 7936 00:17:26.305 } 00:17:26.305 ] 00:17:26.305 }' 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.305 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.607 "name": "raid_bdev1", 00:17:26.607 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:26.607 "strip_size_kb": 0, 00:17:26.607 "state": "online", 00:17:26.607 "raid_level": "raid1", 00:17:26.607 "superblock": true, 00:17:26.607 "num_base_bdevs": 2, 00:17:26.607 "num_base_bdevs_discovered": 1, 00:17:26.607 "num_base_bdevs_operational": 1, 00:17:26.607 "base_bdevs_list": [ 00:17:26.607 { 00:17:26.607 "name": null, 00:17:26.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.607 "is_configured": false, 00:17:26.607 "data_offset": 0, 00:17:26.607 "data_size": 7936 00:17:26.607 }, 00:17:26.607 { 00:17:26.607 "name": "BaseBdev2", 00:17:26.607 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:26.607 "is_configured": 
true, 00:17:26.607 "data_offset": 256, 00:17:26.607 "data_size": 7936 00:17:26.607 } 00:17:26.607 ] 00:17:26.607 }' 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.607 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:26.866 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.867 [2024-12-08 20:12:58.602635] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.867 [2024-12-08 20:12:58.602801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:26.867 [2024-12-08 20:12:58.602814] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:26.867 request: 00:17:26.867 { 00:17:26.867 "base_bdev": "BaseBdev1", 00:17:26.867 "raid_bdev": "raid_bdev1", 00:17:26.867 "method": "bdev_raid_add_base_bdev", 00:17:26.867 "req_id": 1 00:17:26.867 } 00:17:26.867 Got JSON-RPC error response 00:17:26.867 response: 00:17:26.867 { 00:17:26.867 "code": -22, 00:17:26.867 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:26.867 } 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.867 20:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.805 "name": "raid_bdev1", 00:17:27.805 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:27.805 "strip_size_kb": 0, 00:17:27.805 "state": "online", 00:17:27.805 "raid_level": "raid1", 00:17:27.805 "superblock": true, 00:17:27.805 "num_base_bdevs": 2, 00:17:27.805 "num_base_bdevs_discovered": 1, 00:17:27.805 "num_base_bdevs_operational": 1, 00:17:27.805 "base_bdevs_list": [ 00:17:27.805 { 00:17:27.805 "name": null, 00:17:27.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.805 "is_configured": false, 00:17:27.805 
"data_offset": 0, 00:17:27.805 "data_size": 7936 00:17:27.805 }, 00:17:27.805 { 00:17:27.805 "name": "BaseBdev2", 00:17:27.805 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:27.805 "is_configured": true, 00:17:27.805 "data_offset": 256, 00:17:27.805 "data_size": 7936 00:17:27.805 } 00:17:27.805 ] 00:17:27.805 }' 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.805 20:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.063 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.063 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.063 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.063 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.063 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.323 "name": "raid_bdev1", 00:17:28.323 "uuid": "13b6c6ce-683b-4ee4-ab59-a3ac0426e573", 00:17:28.323 
"strip_size_kb": 0, 00:17:28.323 "state": "online", 00:17:28.323 "raid_level": "raid1", 00:17:28.323 "superblock": true, 00:17:28.323 "num_base_bdevs": 2, 00:17:28.323 "num_base_bdevs_discovered": 1, 00:17:28.323 "num_base_bdevs_operational": 1, 00:17:28.323 "base_bdevs_list": [ 00:17:28.323 { 00:17:28.323 "name": null, 00:17:28.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.323 "is_configured": false, 00:17:28.323 "data_offset": 0, 00:17:28.323 "data_size": 7936 00:17:28.323 }, 00:17:28.323 { 00:17:28.323 "name": "BaseBdev2", 00:17:28.323 "uuid": "41a6d7b3-3e85-56ed-b559-d4967a7ba71e", 00:17:28.323 "is_configured": true, 00:17:28.323 "data_offset": 256, 00:17:28.323 "data_size": 7936 00:17:28.323 } 00:17:28.323 ] 00:17:28.323 }' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87402 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87402 ']' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87402 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87402 00:17:28.323 20:13:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.323 killing process with pid 87402 00:17:28.323 Received shutdown signal, test time was about 60.000000 seconds 00:17:28.323 00:17:28.323 Latency(us) 00:17:28.323 [2024-12-08T20:13:00.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.323 [2024-12-08T20:13:00.301Z] =================================================================================================================== 00:17:28.323 [2024-12-08T20:13:00.301Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87402' 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87402 00:17:28.323 [2024-12-08 20:13:00.204322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.323 [2024-12-08 20:13:00.204443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.323 20:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87402 00:17:28.323 [2024-12-08 20:13:00.204493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.323 [2024-12-08 20:13:00.204504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:28.583 [2024-12-08 20:13:00.520253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.967 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:29.967 00:17:29.967 real 0m19.147s 00:17:29.967 user 0m24.757s 00:17:29.967 sys 0m2.353s 00:17:29.967 20:13:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.967 ************************************ 00:17:29.967 END TEST raid_rebuild_test_sb_md_separate 00:17:29.967 ************************************ 00:17:29.967 20:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 20:13:01 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:29.967 20:13:01 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:29.967 20:13:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:29.967 20:13:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.967 20:13:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 ************************************ 00:17:29.967 START TEST raid_state_function_test_sb_md_interleaved 00:17:29.967 ************************************ 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.967 20:13:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88082 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88082' 00:17:29.967 Process raid pid: 88082 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88082 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88082 ']' 00:17:29.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.967 20:13:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 [2024-12-08 20:13:01.749184] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:17:29.967 [2024-12-08 20:13:01.749294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.967 [2024-12-08 20:13:01.921560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.227 [2024-12-08 20:13:02.030175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.486 [2024-12-08 20:13:02.224720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.486 [2024-12-08 20:13:02.224758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.746 [2024-12-08 20:13:02.572007] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.746 [2024-12-08 20:13:02.572124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.746 [2024-12-08 20:13:02.572157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.746 [2024-12-08 20:13:02.572182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.746 20:13:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.746 20:13:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.746 "name": "Existed_Raid", 00:17:30.746 "uuid": "6c18f220-263c-4b39-bc8b-224773a425d2", 00:17:30.746 "strip_size_kb": 0, 00:17:30.746 "state": "configuring", 00:17:30.746 "raid_level": "raid1", 00:17:30.746 "superblock": true, 00:17:30.746 "num_base_bdevs": 2, 00:17:30.746 "num_base_bdevs_discovered": 0, 00:17:30.746 "num_base_bdevs_operational": 2, 00:17:30.746 "base_bdevs_list": [ 00:17:30.746 { 00:17:30.746 "name": "BaseBdev1", 00:17:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.746 "is_configured": false, 00:17:30.746 "data_offset": 0, 00:17:30.746 "data_size": 0 00:17:30.746 }, 00:17:30.746 { 00:17:30.746 "name": "BaseBdev2", 00:17:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.746 "is_configured": false, 00:17:30.746 "data_offset": 0, 00:17:30.746 "data_size": 0 00:17:30.746 } 00:17:30.746 ] 00:17:30.746 }' 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.746 20:13:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 [2024-12-08 20:13:03.055111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.316 [2024-12-08 20:13:03.055185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 [2024-12-08 20:13:03.067094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.316 [2024-12-08 20:13:03.067166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.316 [2024-12-08 20:13:03.067193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.316 [2024-12-08 20:13:03.067218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 [2024-12-08 20:13:03.114100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.316 BaseBdev1 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.316 [ 00:17:31.316 { 00:17:31.316 "name": "BaseBdev1", 00:17:31.316 "aliases": [ 00:17:31.316 "096a47e2-4a19-4749-b473-89faa7925471" 00:17:31.316 ], 00:17:31.316 "product_name": "Malloc disk", 00:17:31.316 "block_size": 4128, 00:17:31.316 "num_blocks": 8192, 00:17:31.316 "uuid": "096a47e2-4a19-4749-b473-89faa7925471", 00:17:31.316 "md_size": 32, 00:17:31.316 
"md_interleave": true, 00:17:31.316 "dif_type": 0, 00:17:31.316 "assigned_rate_limits": { 00:17:31.316 "rw_ios_per_sec": 0, 00:17:31.316 "rw_mbytes_per_sec": 0, 00:17:31.316 "r_mbytes_per_sec": 0, 00:17:31.316 "w_mbytes_per_sec": 0 00:17:31.316 }, 00:17:31.316 "claimed": true, 00:17:31.316 "claim_type": "exclusive_write", 00:17:31.316 "zoned": false, 00:17:31.316 "supported_io_types": { 00:17:31.316 "read": true, 00:17:31.316 "write": true, 00:17:31.316 "unmap": true, 00:17:31.316 "flush": true, 00:17:31.316 "reset": true, 00:17:31.316 "nvme_admin": false, 00:17:31.316 "nvme_io": false, 00:17:31.316 "nvme_io_md": false, 00:17:31.316 "write_zeroes": true, 00:17:31.316 "zcopy": true, 00:17:31.316 "get_zone_info": false, 00:17:31.316 "zone_management": false, 00:17:31.316 "zone_append": false, 00:17:31.316 "compare": false, 00:17:31.316 "compare_and_write": false, 00:17:31.316 "abort": true, 00:17:31.316 "seek_hole": false, 00:17:31.316 "seek_data": false, 00:17:31.316 "copy": true, 00:17:31.316 "nvme_iov_md": false 00:17:31.316 }, 00:17:31.316 "memory_domains": [ 00:17:31.316 { 00:17:31.316 "dma_device_id": "system", 00:17:31.316 "dma_device_type": 1 00:17:31.316 }, 00:17:31.316 { 00:17:31.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.316 "dma_device_type": 2 00:17:31.316 } 00:17:31.316 ], 00:17:31.316 "driver_specific": {} 00:17:31.316 } 00:17:31.316 ] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.316 20:13:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.316 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.317 "name": "Existed_Raid", 00:17:31.317 "uuid": "28ab8e33-13a4-49cd-912a-55c3014f84fd", 00:17:31.317 "strip_size_kb": 0, 00:17:31.317 "state": "configuring", 00:17:31.317 "raid_level": "raid1", 
00:17:31.317 "superblock": true, 00:17:31.317 "num_base_bdevs": 2, 00:17:31.317 "num_base_bdevs_discovered": 1, 00:17:31.317 "num_base_bdevs_operational": 2, 00:17:31.317 "base_bdevs_list": [ 00:17:31.317 { 00:17:31.317 "name": "BaseBdev1", 00:17:31.317 "uuid": "096a47e2-4a19-4749-b473-89faa7925471", 00:17:31.317 "is_configured": true, 00:17:31.317 "data_offset": 256, 00:17:31.317 "data_size": 7936 00:17:31.317 }, 00:17:31.317 { 00:17:31.317 "name": "BaseBdev2", 00:17:31.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.317 "is_configured": false, 00:17:31.317 "data_offset": 0, 00:17:31.317 "data_size": 0 00:17:31.317 } 00:17:31.317 ] 00:17:31.317 }' 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.317 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.577 [2024-12-08 20:13:03.537427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.577 [2024-12-08 20:13:03.537471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:31.577 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.577 [2024-12-08 20:13:03.549469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.577 [2024-12-08 20:13:03.551413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.577 [2024-12-08 20:13:03.551489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.836 
20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.836 "name": "Existed_Raid", 00:17:31.836 "uuid": "f934de42-74c5-454b-9e30-2504ed115cf2", 00:17:31.836 "strip_size_kb": 0, 00:17:31.836 "state": "configuring", 00:17:31.836 "raid_level": "raid1", 00:17:31.836 "superblock": true, 00:17:31.836 "num_base_bdevs": 2, 00:17:31.836 "num_base_bdevs_discovered": 1, 00:17:31.836 "num_base_bdevs_operational": 2, 00:17:31.836 "base_bdevs_list": [ 00:17:31.836 { 00:17:31.836 "name": "BaseBdev1", 00:17:31.836 "uuid": "096a47e2-4a19-4749-b473-89faa7925471", 00:17:31.836 "is_configured": true, 00:17:31.836 "data_offset": 256, 00:17:31.836 "data_size": 7936 00:17:31.836 }, 00:17:31.836 { 00:17:31.836 "name": "BaseBdev2", 00:17:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.836 "is_configured": false, 00:17:31.836 "data_offset": 0, 00:17:31.836 "data_size": 0 00:17:31.836 } 00:17:31.836 ] 00:17:31.836 }' 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:31.836 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:32.097 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.097 20:13:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 [2024-12-08 20:13:04.006212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.097 [2024-12-08 20:13:04.006538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.097 [2024-12-08 20:13:04.006588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:32.097 [2024-12-08 20:13:04.006720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:32.097 [2024-12-08 20:13:04.006839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.097 [2024-12-08 20:13:04.006877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.097 [2024-12-08 20:13:04.007019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.097 BaseBdev2 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 [ 00:17:32.097 { 00:17:32.097 "name": "BaseBdev2", 00:17:32.097 "aliases": [ 00:17:32.097 "fdc6835a-a1c8-44be-8f40-50381ab0d61f" 00:17:32.097 ], 00:17:32.097 "product_name": "Malloc disk", 00:17:32.097 "block_size": 4128, 00:17:32.097 "num_blocks": 8192, 00:17:32.097 "uuid": "fdc6835a-a1c8-44be-8f40-50381ab0d61f", 00:17:32.097 "md_size": 32, 00:17:32.097 "md_interleave": true, 00:17:32.097 "dif_type": 0, 00:17:32.097 "assigned_rate_limits": { 00:17:32.097 "rw_ios_per_sec": 0, 00:17:32.097 "rw_mbytes_per_sec": 0, 00:17:32.097 "r_mbytes_per_sec": 0, 00:17:32.097 "w_mbytes_per_sec": 0 00:17:32.097 }, 00:17:32.097 "claimed": true, 00:17:32.097 "claim_type": "exclusive_write", 
00:17:32.097 "zoned": false, 00:17:32.097 "supported_io_types": { 00:17:32.097 "read": true, 00:17:32.097 "write": true, 00:17:32.097 "unmap": true, 00:17:32.097 "flush": true, 00:17:32.097 "reset": true, 00:17:32.097 "nvme_admin": false, 00:17:32.097 "nvme_io": false, 00:17:32.097 "nvme_io_md": false, 00:17:32.097 "write_zeroes": true, 00:17:32.097 "zcopy": true, 00:17:32.097 "get_zone_info": false, 00:17:32.097 "zone_management": false, 00:17:32.097 "zone_append": false, 00:17:32.097 "compare": false, 00:17:32.097 "compare_and_write": false, 00:17:32.097 "abort": true, 00:17:32.097 "seek_hole": false, 00:17:32.097 "seek_data": false, 00:17:32.097 "copy": true, 00:17:32.097 "nvme_iov_md": false 00:17:32.097 }, 00:17:32.097 "memory_domains": [ 00:17:32.097 { 00:17:32.097 "dma_device_id": "system", 00:17:32.097 "dma_device_type": 1 00:17:32.097 }, 00:17:32.097 { 00:17:32.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.097 "dma_device_type": 2 00:17:32.097 } 00:17:32.097 ], 00:17:32.097 "driver_specific": {} 00:17:32.097 } 00:17:32.097 ] 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.097 
20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.357 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.357 "name": "Existed_Raid", 00:17:32.357 "uuid": "f934de42-74c5-454b-9e30-2504ed115cf2", 00:17:32.357 "strip_size_kb": 0, 00:17:32.357 "state": "online", 00:17:32.357 "raid_level": "raid1", 00:17:32.357 "superblock": true, 00:17:32.357 "num_base_bdevs": 2, 00:17:32.357 "num_base_bdevs_discovered": 2, 00:17:32.357 
"num_base_bdevs_operational": 2, 00:17:32.357 "base_bdevs_list": [ 00:17:32.357 { 00:17:32.357 "name": "BaseBdev1", 00:17:32.357 "uuid": "096a47e2-4a19-4749-b473-89faa7925471", 00:17:32.357 "is_configured": true, 00:17:32.357 "data_offset": 256, 00:17:32.357 "data_size": 7936 00:17:32.357 }, 00:17:32.357 { 00:17:32.357 "name": "BaseBdev2", 00:17:32.357 "uuid": "fdc6835a-a1c8-44be-8f40-50381ab0d61f", 00:17:32.357 "is_configured": true, 00:17:32.357 "data_offset": 256, 00:17:32.357 "data_size": 7936 00:17:32.357 } 00:17:32.357 ] 00:17:32.357 }' 00:17:32.357 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.357 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.617 20:13:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.617 [2024-12-08 20:13:04.497703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.617 "name": "Existed_Raid", 00:17:32.617 "aliases": [ 00:17:32.617 "f934de42-74c5-454b-9e30-2504ed115cf2" 00:17:32.617 ], 00:17:32.617 "product_name": "Raid Volume", 00:17:32.617 "block_size": 4128, 00:17:32.617 "num_blocks": 7936, 00:17:32.617 "uuid": "f934de42-74c5-454b-9e30-2504ed115cf2", 00:17:32.617 "md_size": 32, 00:17:32.617 "md_interleave": true, 00:17:32.617 "dif_type": 0, 00:17:32.617 "assigned_rate_limits": { 00:17:32.617 "rw_ios_per_sec": 0, 00:17:32.617 "rw_mbytes_per_sec": 0, 00:17:32.617 "r_mbytes_per_sec": 0, 00:17:32.617 "w_mbytes_per_sec": 0 00:17:32.617 }, 00:17:32.617 "claimed": false, 00:17:32.617 "zoned": false, 00:17:32.617 "supported_io_types": { 00:17:32.617 "read": true, 00:17:32.617 "write": true, 00:17:32.617 "unmap": false, 00:17:32.617 "flush": false, 00:17:32.617 "reset": true, 00:17:32.617 "nvme_admin": false, 00:17:32.617 "nvme_io": false, 00:17:32.617 "nvme_io_md": false, 00:17:32.617 "write_zeroes": true, 00:17:32.617 "zcopy": false, 00:17:32.617 "get_zone_info": false, 00:17:32.617 "zone_management": false, 00:17:32.617 "zone_append": false, 00:17:32.617 "compare": false, 00:17:32.617 "compare_and_write": false, 00:17:32.617 "abort": false, 00:17:32.617 "seek_hole": false, 00:17:32.617 "seek_data": false, 00:17:32.617 "copy": false, 00:17:32.617 "nvme_iov_md": false 00:17:32.617 }, 00:17:32.617 "memory_domains": [ 00:17:32.617 { 00:17:32.617 "dma_device_id": "system", 00:17:32.617 "dma_device_type": 1 00:17:32.617 }, 00:17:32.617 { 00:17:32.617 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:32.617 "dma_device_type": 2 00:17:32.617 }, 00:17:32.617 { 00:17:32.617 "dma_device_id": "system", 00:17:32.617 "dma_device_type": 1 00:17:32.617 }, 00:17:32.617 { 00:17:32.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.617 "dma_device_type": 2 00:17:32.617 } 00:17:32.617 ], 00:17:32.617 "driver_specific": { 00:17:32.617 "raid": { 00:17:32.617 "uuid": "f934de42-74c5-454b-9e30-2504ed115cf2", 00:17:32.617 "strip_size_kb": 0, 00:17:32.617 "state": "online", 00:17:32.617 "raid_level": "raid1", 00:17:32.617 "superblock": true, 00:17:32.617 "num_base_bdevs": 2, 00:17:32.617 "num_base_bdevs_discovered": 2, 00:17:32.617 "num_base_bdevs_operational": 2, 00:17:32.617 "base_bdevs_list": [ 00:17:32.617 { 00:17:32.617 "name": "BaseBdev1", 00:17:32.617 "uuid": "096a47e2-4a19-4749-b473-89faa7925471", 00:17:32.617 "is_configured": true, 00:17:32.617 "data_offset": 256, 00:17:32.617 "data_size": 7936 00:17:32.617 }, 00:17:32.617 { 00:17:32.617 "name": "BaseBdev2", 00:17:32.617 "uuid": "fdc6835a-a1c8-44be-8f40-50381ab0d61f", 00:17:32.617 "is_configured": true, 00:17:32.617 "data_offset": 256, 00:17:32.617 "data_size": 7936 00:17:32.617 } 00:17:32.617 ] 00:17:32.617 } 00:17:32.617 } 00:17:32.617 }' 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:32.617 BaseBdev2' 00:17:32.617 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:32.877 
20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:32.877 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.878 [2024-12-08 20:13:04.713097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.878 20:13:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.878 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.137 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.137 "name": "Existed_Raid", 00:17:33.137 "uuid": "f934de42-74c5-454b-9e30-2504ed115cf2", 00:17:33.137 "strip_size_kb": 0, 00:17:33.137 "state": "online", 00:17:33.137 "raid_level": "raid1", 00:17:33.137 "superblock": true, 00:17:33.137 "num_base_bdevs": 2, 00:17:33.137 "num_base_bdevs_discovered": 1, 00:17:33.137 "num_base_bdevs_operational": 1, 00:17:33.137 "base_bdevs_list": [ 00:17:33.137 { 00:17:33.137 "name": null, 00:17:33.137 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:33.138 "is_configured": false, 00:17:33.138 "data_offset": 0, 00:17:33.138 "data_size": 7936 00:17:33.138 }, 00:17:33.138 { 00:17:33.138 "name": "BaseBdev2", 00:17:33.138 "uuid": "fdc6835a-a1c8-44be-8f40-50381ab0d61f", 00:17:33.138 "is_configured": true, 00:17:33.138 "data_offset": 256, 00:17:33.138 "data_size": 7936 00:17:33.138 } 00:17:33.138 ] 00:17:33.138 }' 00:17:33.138 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.138 20:13:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:33.397 20:13:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.397 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.397 [2024-12-08 20:13:05.304507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.397 [2024-12-08 20:13:05.304655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.656 [2024-12-08 20:13:05.398776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.656 [2024-12-08 20:13:05.398921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.656 [2024-12-08 20:13:05.398939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88082 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88082 ']' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88082 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88082 00:17:33.656 killing process with pid 88082 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88082' 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88082 00:17:33.656 [2024-12-08 20:13:05.480899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.656 20:13:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88082 00:17:33.656 [2024-12-08 20:13:05.497639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.035 
************************************ 00:17:35.035 END TEST raid_state_function_test_sb_md_interleaved 00:17:35.035 ************************************ 00:17:35.035 20:13:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:35.035 00:17:35.035 real 0m4.932s 00:17:35.035 user 0m7.117s 00:17:35.035 sys 0m0.824s 00:17:35.035 20:13:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.035 20:13:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.035 20:13:06 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:35.035 20:13:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:35.035 20:13:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.035 20:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.035 ************************************ 00:17:35.035 START TEST raid_superblock_test_md_interleaved 00:17:35.035 ************************************ 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88329 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88329 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88329 ']' 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.035 20:13:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.035 [2024-12-08 20:13:06.739177] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:35.035 [2024-12-08 20:13:06.739388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88329 ] 00:17:35.035 [2024-12-08 20:13:06.911254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.295 [2024-12-08 20:13:07.022576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.295 [2024-12-08 20:13:07.217807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.295 [2024-12-08 20:13:07.217839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 malloc1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 [2024-12-08 20:13:07.613017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.862 [2024-12-08 20:13:07.613109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.862 [2024-12-08 20:13:07.613183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.862 [2024-12-08 20:13:07.613220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.862 
[2024-12-08 20:13:07.614997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.862 [2024-12-08 20:13:07.615062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.862 pt1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 malloc2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 [2024-12-08 20:13:07.669469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.862 [2024-12-08 20:13:07.669522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.862 [2024-12-08 20:13:07.669557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:35.862 [2024-12-08 20:13:07.669566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.862 [2024-12-08 20:13:07.671308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.862 [2024-12-08 20:13:07.671388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.862 pt2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 [2024-12-08 20:13:07.681483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.862 [2024-12-08 20:13:07.683191] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.862 [2024-12-08 20:13:07.683382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.862 [2024-12-08 20:13:07.683394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.862 [2024-12-08 20:13:07.683462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:35.862 [2024-12-08 20:13:07.683537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.862 [2024-12-08 20:13:07.683549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:35.862 [2024-12-08 20:13:07.683614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.862 
20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.862 "name": "raid_bdev1", 00:17:35.862 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:35.862 "strip_size_kb": 0, 00:17:35.862 "state": "online", 00:17:35.862 "raid_level": "raid1", 00:17:35.862 "superblock": true, 00:17:35.862 "num_base_bdevs": 2, 00:17:35.862 "num_base_bdevs_discovered": 2, 00:17:35.862 "num_base_bdevs_operational": 2, 00:17:35.862 "base_bdevs_list": [ 00:17:35.862 { 00:17:35.862 "name": "pt1", 00:17:35.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.862 "is_configured": true, 00:17:35.862 "data_offset": 256, 00:17:35.862 "data_size": 7936 00:17:35.862 }, 00:17:35.862 { 00:17:35.862 "name": "pt2", 00:17:35.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.862 "is_configured": true, 00:17:35.862 "data_offset": 256, 00:17:35.862 "data_size": 7936 00:17:35.862 } 00:17:35.862 ] 00:17:35.862 }' 00:17:35.862 20:13:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.862 20:13:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.431 [2024-12-08 20:13:08.156915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.431 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.431 "name": "raid_bdev1", 00:17:36.431 "aliases": [ 00:17:36.431 "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2" 00:17:36.431 ], 00:17:36.431 "product_name": "Raid Volume", 00:17:36.431 "block_size": 4128, 00:17:36.431 "num_blocks": 7936, 00:17:36.431 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:36.431 "md_size": 32, 
00:17:36.431 "md_interleave": true, 00:17:36.431 "dif_type": 0, 00:17:36.431 "assigned_rate_limits": { 00:17:36.431 "rw_ios_per_sec": 0, 00:17:36.431 "rw_mbytes_per_sec": 0, 00:17:36.431 "r_mbytes_per_sec": 0, 00:17:36.431 "w_mbytes_per_sec": 0 00:17:36.431 }, 00:17:36.431 "claimed": false, 00:17:36.431 "zoned": false, 00:17:36.431 "supported_io_types": { 00:17:36.431 "read": true, 00:17:36.431 "write": true, 00:17:36.431 "unmap": false, 00:17:36.431 "flush": false, 00:17:36.431 "reset": true, 00:17:36.431 "nvme_admin": false, 00:17:36.431 "nvme_io": false, 00:17:36.431 "nvme_io_md": false, 00:17:36.431 "write_zeroes": true, 00:17:36.431 "zcopy": false, 00:17:36.431 "get_zone_info": false, 00:17:36.431 "zone_management": false, 00:17:36.431 "zone_append": false, 00:17:36.431 "compare": false, 00:17:36.431 "compare_and_write": false, 00:17:36.431 "abort": false, 00:17:36.431 "seek_hole": false, 00:17:36.431 "seek_data": false, 00:17:36.431 "copy": false, 00:17:36.431 "nvme_iov_md": false 00:17:36.431 }, 00:17:36.431 "memory_domains": [ 00:17:36.431 { 00:17:36.431 "dma_device_id": "system", 00:17:36.431 "dma_device_type": 1 00:17:36.431 }, 00:17:36.431 { 00:17:36.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.431 "dma_device_type": 2 00:17:36.431 }, 00:17:36.431 { 00:17:36.431 "dma_device_id": "system", 00:17:36.431 "dma_device_type": 1 00:17:36.431 }, 00:17:36.431 { 00:17:36.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.431 "dma_device_type": 2 00:17:36.431 } 00:17:36.431 ], 00:17:36.431 "driver_specific": { 00:17:36.431 "raid": { 00:17:36.431 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:36.431 "strip_size_kb": 0, 00:17:36.431 "state": "online", 00:17:36.431 "raid_level": "raid1", 00:17:36.431 "superblock": true, 00:17:36.431 "num_base_bdevs": 2, 00:17:36.431 "num_base_bdevs_discovered": 2, 00:17:36.431 "num_base_bdevs_operational": 2, 00:17:36.431 "base_bdevs_list": [ 00:17:36.431 { 00:17:36.431 "name": "pt1", 00:17:36.431 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:36.431 "is_configured": true, 00:17:36.431 "data_offset": 256, 00:17:36.431 "data_size": 7936 00:17:36.431 }, 00:17:36.431 { 00:17:36.431 "name": "pt2", 00:17:36.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.431 "is_configured": true, 00:17:36.431 "data_offset": 256, 00:17:36.431 "data_size": 7936 00:17:36.431 } 00:17:36.431 ] 00:17:36.431 } 00:17:36.431 } 00:17:36.431 }' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:36.432 pt2' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:36.432 20:13:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:36.432 [2024-12-08 20:13:08.384518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.432 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 ']' 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.692 [2024-12-08 20:13:08.432144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.692 [2024-12-08 20:13:08.432207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.692 [2024-12-08 20:13:08.432288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.692 [2024-12-08 20:13:08.432346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.692 [2024-12-08 20:13:08.432358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.692 20:13:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:36.692 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.693 [2024-12-08 20:13:08.571915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:36.693 [2024-12-08 20:13:08.573834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:36.693 [2024-12-08 20:13:08.573909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:36.693 [2024-12-08 20:13:08.573970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:36.693 [2024-12-08 20:13:08.573986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.693 [2024-12-08 20:13:08.573996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:36.693 request: 00:17:36.693 { 00:17:36.693 "name": "raid_bdev1", 00:17:36.693 "raid_level": "raid1", 00:17:36.693 "base_bdevs": [ 00:17:36.693 "malloc1", 00:17:36.693 "malloc2" 00:17:36.693 ], 00:17:36.693 "superblock": false, 00:17:36.693 "method": "bdev_raid_create", 00:17:36.693 "req_id": 1 00:17:36.693 } 00:17:36.693 Got JSON-RPC error response 00:17:36.693 response: 00:17:36.693 { 00:17:36.693 "code": -17, 00:17:36.693 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:36.693 } 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.693 20:13:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.693 [2024-12-08 20:13:08.623814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.693 [2024-12-08 20:13:08.623898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.693 [2024-12-08 20:13:08.623930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:36.693 [2024-12-08 20:13:08.624010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.693 [2024-12-08 20:13:08.625844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.693 [2024-12-08 20:13:08.625910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.693 [2024-12-08 20:13:08.625988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:36.693 [2024-12-08 20:13:08.626079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.693 pt1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.693 20:13:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.693 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.954 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.954 
"name": "raid_bdev1", 00:17:36.954 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:36.954 "strip_size_kb": 0, 00:17:36.954 "state": "configuring", 00:17:36.954 "raid_level": "raid1", 00:17:36.954 "superblock": true, 00:17:36.954 "num_base_bdevs": 2, 00:17:36.954 "num_base_bdevs_discovered": 1, 00:17:36.954 "num_base_bdevs_operational": 2, 00:17:36.954 "base_bdevs_list": [ 00:17:36.954 { 00:17:36.954 "name": "pt1", 00:17:36.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.954 "is_configured": true, 00:17:36.954 "data_offset": 256, 00:17:36.954 "data_size": 7936 00:17:36.954 }, 00:17:36.954 { 00:17:36.954 "name": null, 00:17:36.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.954 "is_configured": false, 00:17:36.954 "data_offset": 256, 00:17:36.954 "data_size": 7936 00:17:36.954 } 00:17:36.954 ] 00:17:36.954 }' 00:17:36.954 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.954 20:13:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.215 [2024-12-08 20:13:09.075071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.215 [2024-12-08 20:13:09.075139] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.215 [2024-12-08 20:13:09.075160] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:37.215 [2024-12-08 20:13:09.075171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.215 [2024-12-08 20:13:09.075339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.215 [2024-12-08 20:13:09.075354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.215 [2024-12-08 20:13:09.075402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:37.215 [2024-12-08 20:13:09.075422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.215 [2024-12-08 20:13:09.075504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.215 [2024-12-08 20:13:09.075515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:37.215 [2024-12-08 20:13:09.075613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:37.215 [2024-12-08 20:13:09.075679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.215 [2024-12-08 20:13:09.075687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:37.215 [2024-12-08 20:13:09.075758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.215 pt2 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:37.215 20:13:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.215 "name": 
"raid_bdev1", 00:17:37.215 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:37.215 "strip_size_kb": 0, 00:17:37.215 "state": "online", 00:17:37.215 "raid_level": "raid1", 00:17:37.215 "superblock": true, 00:17:37.215 "num_base_bdevs": 2, 00:17:37.215 "num_base_bdevs_discovered": 2, 00:17:37.215 "num_base_bdevs_operational": 2, 00:17:37.215 "base_bdevs_list": [ 00:17:37.215 { 00:17:37.215 "name": "pt1", 00:17:37.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.215 "is_configured": true, 00:17:37.215 "data_offset": 256, 00:17:37.215 "data_size": 7936 00:17:37.215 }, 00:17:37.215 { 00:17:37.215 "name": "pt2", 00:17:37.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.215 "is_configured": true, 00:17:37.215 "data_offset": 256, 00:17:37.215 "data_size": 7936 00:17:37.215 } 00:17:37.215 ] 00:17:37.215 }' 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.215 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.783 [2024-12-08 20:13:09.554518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.783 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:37.783 "name": "raid_bdev1", 00:17:37.783 "aliases": [ 00:17:37.783 "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2" 00:17:37.783 ], 00:17:37.783 "product_name": "Raid Volume", 00:17:37.783 "block_size": 4128, 00:17:37.783 "num_blocks": 7936, 00:17:37.783 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:37.783 "md_size": 32, 00:17:37.783 "md_interleave": true, 00:17:37.783 "dif_type": 0, 00:17:37.783 "assigned_rate_limits": { 00:17:37.783 "rw_ios_per_sec": 0, 00:17:37.783 "rw_mbytes_per_sec": 0, 00:17:37.783 "r_mbytes_per_sec": 0, 00:17:37.783 "w_mbytes_per_sec": 0 00:17:37.783 }, 00:17:37.783 "claimed": false, 00:17:37.783 "zoned": false, 00:17:37.783 "supported_io_types": { 00:17:37.783 "read": true, 00:17:37.783 "write": true, 00:17:37.783 "unmap": false, 00:17:37.783 "flush": false, 00:17:37.783 "reset": true, 00:17:37.783 "nvme_admin": false, 00:17:37.783 "nvme_io": false, 00:17:37.783 "nvme_io_md": false, 00:17:37.783 "write_zeroes": true, 00:17:37.783 "zcopy": false, 00:17:37.783 "get_zone_info": false, 00:17:37.783 "zone_management": false, 00:17:37.783 "zone_append": false, 00:17:37.783 "compare": false, 00:17:37.783 "compare_and_write": false, 00:17:37.783 "abort": false, 00:17:37.783 "seek_hole": false, 00:17:37.783 "seek_data": false, 00:17:37.783 "copy": false, 00:17:37.783 "nvme_iov_md": false 00:17:37.783 }, 
00:17:37.783 "memory_domains": [ 00:17:37.783 { 00:17:37.783 "dma_device_id": "system", 00:17:37.783 "dma_device_type": 1 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.783 "dma_device_type": 2 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "system", 00:17:37.783 "dma_device_type": 1 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.783 "dma_device_type": 2 00:17:37.783 } 00:17:37.783 ], 00:17:37.783 "driver_specific": { 00:17:37.783 "raid": { 00:17:37.783 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:37.783 "strip_size_kb": 0, 00:17:37.783 "state": "online", 00:17:37.783 "raid_level": "raid1", 00:17:37.783 "superblock": true, 00:17:37.783 "num_base_bdevs": 2, 00:17:37.783 "num_base_bdevs_discovered": 2, 00:17:37.783 "num_base_bdevs_operational": 2, 00:17:37.783 "base_bdevs_list": [ 00:17:37.783 { 00:17:37.783 "name": "pt1", 00:17:37.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.783 "is_configured": true, 00:17:37.783 "data_offset": 256, 00:17:37.784 "data_size": 7936 00:17:37.784 }, 00:17:37.784 { 00:17:37.784 "name": "pt2", 00:17:37.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.784 "is_configured": true, 00:17:37.784 "data_offset": 256, 00:17:37.784 "data_size": 7936 00:17:37.784 } 00:17:37.784 ] 00:17:37.784 } 00:17:37.784 } 00:17:37.784 }' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:37.784 pt2' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.784 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.042 [2024-12-08 20:13:09.762145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 '!=' 7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 ']' 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.042 [2024-12-08 20:13:09.809831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:17:38.042 "name": "raid_bdev1", 00:17:38.042 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:38.042 "strip_size_kb": 0, 00:17:38.042 "state": "online", 00:17:38.042 "raid_level": "raid1", 00:17:38.042 "superblock": true, 00:17:38.042 "num_base_bdevs": 2, 00:17:38.042 "num_base_bdevs_discovered": 1, 00:17:38.042 "num_base_bdevs_operational": 1, 00:17:38.042 "base_bdevs_list": [ 00:17:38.042 { 00:17:38.042 "name": null, 00:17:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.042 "is_configured": false, 00:17:38.042 "data_offset": 0, 00:17:38.042 "data_size": 7936 00:17:38.042 }, 00:17:38.042 { 00:17:38.042 "name": "pt2", 00:17:38.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.042 "is_configured": true, 00:17:38.042 "data_offset": 256, 00:17:38.042 "data_size": 7936 00:17:38.042 } 00:17:38.042 ] 00:17:38.042 }' 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.042 20:13:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.300 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.301 [2024-12-08 20:13:10.225095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.301 [2024-12-08 20:13:10.225159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.301 [2024-12-08 20:13:10.225267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.301 [2024-12-08 20:13:10.225347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.301 [2024-12-08 
20:13:10.225408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.301 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.560 [2024-12-08 20:13:10.285029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:38.560 [2024-12-08 20:13:10.285112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.560 [2024-12-08 20:13:10.285161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:38.560 [2024-12-08 20:13:10.285193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.560 [2024-12-08 20:13:10.287046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.560 [2024-12-08 20:13:10.287114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:38.560 [2024-12-08 20:13:10.287190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:38.560 [2024-12-08 20:13:10.287272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.560 [2024-12-08 20:13:10.287379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:38.560 [2024-12-08 20:13:10.287418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:17:38.560 [2024-12-08 20:13:10.287519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:38.560 [2024-12-08 20:13:10.287615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:38.560 [2024-12-08 20:13:10.287623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:38.560 [2024-12-08 20:13:10.287684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.560 pt2 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.560 "name": "raid_bdev1", 00:17:38.560 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:38.560 "strip_size_kb": 0, 00:17:38.560 "state": "online", 00:17:38.560 "raid_level": "raid1", 00:17:38.560 "superblock": true, 00:17:38.560 "num_base_bdevs": 2, 00:17:38.560 "num_base_bdevs_discovered": 1, 00:17:38.560 "num_base_bdevs_operational": 1, 00:17:38.560 "base_bdevs_list": [ 00:17:38.560 { 00:17:38.560 "name": null, 00:17:38.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.560 "is_configured": false, 00:17:38.560 "data_offset": 256, 00:17:38.560 "data_size": 7936 00:17:38.560 }, 00:17:38.560 { 00:17:38.560 "name": "pt2", 00:17:38.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.560 "is_configured": true, 00:17:38.560 "data_offset": 256, 00:17:38.560 "data_size": 7936 00:17:38.560 } 00:17:38.560 ] 00:17:38.560 }' 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.560 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.819 [2024-12-08 20:13:10.728247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.819 [2024-12-08 20:13:10.728330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.819 [2024-12-08 20:13:10.728439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.819 [2024-12-08 20:13:10.728558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.819 [2024-12-08 20:13:10.728605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.819 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.819 [2024-12-08 20:13:10.776172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:38.819 [2024-12-08 20:13:10.776277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.819 [2024-12-08 20:13:10.776313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:38.819 [2024-12-08 20:13:10.776339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.819 [2024-12-08 20:13:10.778244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.819 [2024-12-08 20:13:10.778316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:38.819 [2024-12-08 20:13:10.778391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:38.819 [2024-12-08 20:13:10.778480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:38.819 [2024-12-08 20:13:10.778655] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:38.819 [2024-12-08 20:13:10.778715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.819 [2024-12-08 20:13:10.778771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:38.820 [2024-12-08 20:13:10.778901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.820 [2024-12-08 20:13:10.779049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:38.820 [2024-12-08 20:13:10.779090] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:38.820 [2024-12-08 20:13:10.779195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:38.820 [2024-12-08 20:13:10.779300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:38.820 [2024-12-08 20:13:10.779338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:38.820 [2024-12-08 20:13:10.779471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.820 pt1 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.820 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.078 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.078 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.078 "name": "raid_bdev1", 00:17:39.078 "uuid": "7409e3c7-6ae1-45ed-9bbb-da545c1c37a2", 00:17:39.078 "strip_size_kb": 0, 00:17:39.078 "state": "online", 00:17:39.078 "raid_level": "raid1", 00:17:39.078 "superblock": true, 00:17:39.078 "num_base_bdevs": 2, 00:17:39.078 "num_base_bdevs_discovered": 1, 00:17:39.078 "num_base_bdevs_operational": 1, 00:17:39.078 "base_bdevs_list": [ 00:17:39.078 { 00:17:39.078 "name": null, 00:17:39.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.078 "is_configured": false, 00:17:39.078 "data_offset": 256, 00:17:39.078 "data_size": 7936 00:17:39.078 }, 00:17:39.078 { 00:17:39.078 "name": "pt2", 00:17:39.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.078 "is_configured": true, 00:17:39.078 "data_offset": 256, 00:17:39.078 "data_size": 7936 00:17:39.078 } 00:17:39.078 ] 00:17:39.078 }' 00:17:39.078 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.078 20:13:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:39.337 [2024-12-08 20:13:11.263586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 '!=' 7409e3c7-6ae1-45ed-9bbb-da545c1c37a2 ']' 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88329 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88329 ']' 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88329 00:17:39.337 20:13:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.337 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88329 00:17:39.596 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.596 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.596 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88329' 00:17:39.596 killing process with pid 88329 00:17:39.596 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88329 00:17:39.596 [2024-12-08 20:13:11.341335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.596 [2024-12-08 20:13:11.341414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.596 [2024-12-08 20:13:11.341459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.596 [2024-12-08 20:13:11.341472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:39.596 20:13:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88329 00:17:39.596 [2024-12-08 20:13:11.539120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.973 ************************************ 00:17:40.973 END TEST raid_superblock_test_md_interleaved 00:17:40.973 ************************************ 00:17:40.973 20:13:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.973 00:17:40.973 real 0m5.974s 00:17:40.973 user 0m9.126s 
00:17:40.973 sys 0m0.987s 00:17:40.973 20:13:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.973 20:13:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.973 20:13:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:40.973 20:13:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:40.973 20:13:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.973 20:13:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.973 ************************************ 00:17:40.973 START TEST raid_rebuild_test_sb_md_interleaved 00:17:40.973 ************************************ 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88652 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88652 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88652 ']' 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.973 20:13:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.973 [2024-12-08 20:13:12.795585] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:40.973 [2024-12-08 20:13:12.795772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:40.973 Zero copy mechanism will not be used. 
00:17:40.973 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88652 ] 00:17:41.230 [2024-12-08 20:13:12.972244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.230 [2024-12-08 20:13:13.076930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.488 [2024-12-08 20:13:13.271264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.488 [2024-12-08 20:13:13.271384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.746 BaseBdev1_malloc 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.746 [2024-12-08 20:13:13.665492] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.746 [2024-12-08 20:13:13.665605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.746 [2024-12-08 20:13:13.665646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.746 [2024-12-08 20:13:13.665677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.746 [2024-12-08 20:13:13.667504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.746 [2024-12-08 20:13:13.667582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.746 BaseBdev1 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.746 BaseBdev2_malloc 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.746 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.746 [2024-12-08 20:13:13.719191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:17:41.746 [2024-12-08 20:13:13.719280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.746 [2024-12-08 20:13:13.719319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.746 [2024-12-08 20:13:13.719350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.746 [2024-12-08 20:13:13.721228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.746 [2024-12-08 20:13:13.721311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.005 BaseBdev2 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.005 spare_malloc 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.005 spare_delay 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.005 [2024-12-08 20:13:13.797632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.005 [2024-12-08 20:13:13.797685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.005 [2024-12-08 20:13:13.797722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:42.005 [2024-12-08 20:13:13.797732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.005 [2024-12-08 20:13:13.799524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.005 [2024-12-08 20:13:13.799617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.005 spare 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.005 [2024-12-08 20:13:13.809667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.005 [2024-12-08 20:13:13.811451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.005 [2024-12-08 20:13:13.811649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.005 [2024-12-08 20:13:13.811666] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:42.005 [2024-12-08 20:13:13.811731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:42.005 [2024-12-08 20:13:13.811794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.005 [2024-12-08 20:13:13.811802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.005 [2024-12-08 20:13:13.811861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.005 20:13:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.005 "name": "raid_bdev1", 00:17:42.005 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:42.005 "strip_size_kb": 0, 00:17:42.005 "state": "online", 00:17:42.005 "raid_level": "raid1", 00:17:42.005 "superblock": true, 00:17:42.005 "num_base_bdevs": 2, 00:17:42.005 "num_base_bdevs_discovered": 2, 00:17:42.005 "num_base_bdevs_operational": 2, 00:17:42.005 "base_bdevs_list": [ 00:17:42.005 { 00:17:42.005 "name": "BaseBdev1", 00:17:42.005 "uuid": "37ee7ee1-2557-51ee-9af1-054ae09b44e4", 00:17:42.005 "is_configured": true, 00:17:42.005 "data_offset": 256, 00:17:42.005 "data_size": 7936 00:17:42.005 }, 00:17:42.005 { 00:17:42.005 "name": "BaseBdev2", 00:17:42.005 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:42.005 "is_configured": true, 00:17:42.005 "data_offset": 256, 00:17:42.005 "data_size": 7936 00:17:42.005 } 00:17:42.005 ] 00:17:42.005 }' 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.005 20:13:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.571 20:13:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 [2024-12-08 20:13:14.257241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.571 [2024-12-08 20:13:14.352741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.571 "name": "raid_bdev1", 00:17:42.571 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:42.571 "strip_size_kb": 0, 00:17:42.571 "state": "online", 00:17:42.571 "raid_level": "raid1", 00:17:42.571 "superblock": true, 00:17:42.571 "num_base_bdevs": 2, 00:17:42.571 "num_base_bdevs_discovered": 1, 00:17:42.571 "num_base_bdevs_operational": 1, 00:17:42.571 "base_bdevs_list": [ 00:17:42.571 { 00:17:42.571 "name": null, 00:17:42.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.571 "is_configured": false, 00:17:42.571 "data_offset": 0, 00:17:42.571 "data_size": 7936 00:17:42.571 }, 00:17:42.571 { 00:17:42.571 "name": "BaseBdev2", 00:17:42.571 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:42.571 "is_configured": true, 00:17:42.571 "data_offset": 256, 00:17:42.571 "data_size": 7936 00:17:42.571 } 00:17:42.571 ] 00:17:42.571 }' 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.571 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.830 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.830 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.830 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.830 [2024-12-08 20:13:14.760064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.830 [2024-12-08 20:13:14.776959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:17:42.830 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.830 20:13:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:42.830 [2024-12-08 20:13:14.778776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.211 "name": "raid_bdev1", 00:17:44.211 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:44.211 "strip_size_kb": 0, 00:17:44.211 "state": "online", 00:17:44.211 "raid_level": "raid1", 00:17:44.211 "superblock": true, 00:17:44.211 
"num_base_bdevs": 2, 00:17:44.211 "num_base_bdevs_discovered": 2, 00:17:44.211 "num_base_bdevs_operational": 2, 00:17:44.211 "process": { 00:17:44.211 "type": "rebuild", 00:17:44.211 "target": "spare", 00:17:44.211 "progress": { 00:17:44.211 "blocks": 2560, 00:17:44.211 "percent": 32 00:17:44.211 } 00:17:44.211 }, 00:17:44.211 "base_bdevs_list": [ 00:17:44.211 { 00:17:44.211 "name": "spare", 00:17:44.211 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:44.211 "is_configured": true, 00:17:44.211 "data_offset": 256, 00:17:44.211 "data_size": 7936 00:17:44.211 }, 00:17:44.211 { 00:17:44.211 "name": "BaseBdev2", 00:17:44.211 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:44.211 "is_configured": true, 00:17:44.211 "data_offset": 256, 00:17:44.211 "data_size": 7936 00:17:44.211 } 00:17:44.211 ] 00:17:44.211 }' 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.211 20:13:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.211 [2024-12-08 20:13:15.930191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.211 [2024-12-08 20:13:15.983643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.211 
[2024-12-08 20:13:15.983759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.211 [2024-12-08 20:13:15.983795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.211 [2024-12-08 20:13:15.983822] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.211 "name": "raid_bdev1", 00:17:44.211 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:44.211 "strip_size_kb": 0, 00:17:44.211 "state": "online", 00:17:44.211 "raid_level": "raid1", 00:17:44.211 "superblock": true, 00:17:44.211 "num_base_bdevs": 2, 00:17:44.211 "num_base_bdevs_discovered": 1, 00:17:44.211 "num_base_bdevs_operational": 1, 00:17:44.211 "base_bdevs_list": [ 00:17:44.211 { 00:17:44.211 "name": null, 00:17:44.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.211 "is_configured": false, 00:17:44.211 "data_offset": 0, 00:17:44.211 "data_size": 7936 00:17:44.211 }, 00:17:44.211 { 00:17:44.211 "name": "BaseBdev2", 00:17:44.211 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:44.211 "is_configured": true, 00:17:44.211 "data_offset": 256, 00:17:44.211 "data_size": 7936 00:17:44.211 } 00:17:44.211 ] 00:17:44.211 }' 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.211 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.779 20:13:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.779 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.779 "name": "raid_bdev1", 00:17:44.779 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:44.779 "strip_size_kb": 0, 00:17:44.779 "state": "online", 00:17:44.779 "raid_level": "raid1", 00:17:44.779 "superblock": true, 00:17:44.779 "num_base_bdevs": 2, 00:17:44.779 "num_base_bdevs_discovered": 1, 00:17:44.779 "num_base_bdevs_operational": 1, 00:17:44.779 "base_bdevs_list": [ 00:17:44.779 { 00:17:44.779 "name": null, 00:17:44.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.779 "is_configured": false, 00:17:44.779 "data_offset": 0, 00:17:44.779 "data_size": 7936 00:17:44.779 }, 00:17:44.779 { 00:17:44.779 "name": "BaseBdev2", 00:17:44.779 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:44.779 "is_configured": true, 00:17:44.779 "data_offset": 256, 00:17:44.780 "data_size": 7936 00:17:44.780 } 00:17:44.780 ] 00:17:44.780 }' 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.780 20:13:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.780 [2024-12-08 20:13:16.596994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.780 [2024-12-08 20:13:16.613239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.780 20:13:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:44.780 [2024-12-08 20:13:16.615022] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.720 
20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.720 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.720 "name": "raid_bdev1", 00:17:45.720 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:45.720 "strip_size_kb": 0, 00:17:45.720 "state": "online", 00:17:45.720 "raid_level": "raid1", 00:17:45.720 "superblock": true, 00:17:45.720 "num_base_bdevs": 2, 00:17:45.720 "num_base_bdevs_discovered": 2, 00:17:45.720 "num_base_bdevs_operational": 2, 00:17:45.720 "process": { 00:17:45.720 "type": "rebuild", 00:17:45.720 "target": "spare", 00:17:45.720 "progress": { 00:17:45.720 "blocks": 2560, 00:17:45.720 "percent": 32 00:17:45.720 } 00:17:45.720 }, 00:17:45.720 "base_bdevs_list": [ 00:17:45.720 { 00:17:45.720 "name": "spare", 00:17:45.720 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:45.720 "is_configured": true, 00:17:45.720 "data_offset": 256, 00:17:45.720 "data_size": 7936 00:17:45.720 }, 00:17:45.721 { 00:17:45.721 "name": "BaseBdev2", 00:17:45.721 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:45.721 "is_configured": true, 00:17:45.721 "data_offset": 256, 00:17:45.721 "data_size": 7936 00:17:45.721 } 00:17:45.721 ] 00:17:45.721 }' 00:17:45.721 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.721 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:45.981 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=719 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.981 "name": "raid_bdev1", 00:17:45.981 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:45.981 "strip_size_kb": 0, 00:17:45.981 "state": "online", 00:17:45.981 "raid_level": "raid1", 00:17:45.981 "superblock": true, 00:17:45.981 "num_base_bdevs": 2, 00:17:45.981 "num_base_bdevs_discovered": 2, 00:17:45.981 "num_base_bdevs_operational": 2, 00:17:45.981 "process": { 00:17:45.981 "type": "rebuild", 00:17:45.981 "target": "spare", 00:17:45.981 "progress": { 00:17:45.981 "blocks": 2816, 00:17:45.981 "percent": 35 00:17:45.981 } 00:17:45.981 }, 00:17:45.981 "base_bdevs_list": [ 00:17:45.981 { 00:17:45.981 "name": "spare", 00:17:45.981 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:45.981 "is_configured": true, 00:17:45.981 "data_offset": 256, 00:17:45.981 "data_size": 7936 00:17:45.981 }, 00:17:45.981 { 00:17:45.981 "name": "BaseBdev2", 00:17:45.981 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:45.981 "is_configured": true, 00:17:45.981 "data_offset": 256, 00:17:45.981 "data_size": 7936 00:17:45.981 } 00:17:45.981 ] 00:17:45.981 }' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.981 20:13:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.981 20:13:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.921 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.181 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.181 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.181 "name": "raid_bdev1", 00:17:47.181 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:47.181 "strip_size_kb": 0, 00:17:47.181 "state": 
"online", 00:17:47.181 "raid_level": "raid1", 00:17:47.181 "superblock": true, 00:17:47.181 "num_base_bdevs": 2, 00:17:47.181 "num_base_bdevs_discovered": 2, 00:17:47.181 "num_base_bdevs_operational": 2, 00:17:47.181 "process": { 00:17:47.181 "type": "rebuild", 00:17:47.181 "target": "spare", 00:17:47.181 "progress": { 00:17:47.181 "blocks": 5632, 00:17:47.181 "percent": 70 00:17:47.181 } 00:17:47.181 }, 00:17:47.181 "base_bdevs_list": [ 00:17:47.181 { 00:17:47.181 "name": "spare", 00:17:47.181 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:47.181 "is_configured": true, 00:17:47.181 "data_offset": 256, 00:17:47.181 "data_size": 7936 00:17:47.181 }, 00:17:47.181 { 00:17:47.181 "name": "BaseBdev2", 00:17:47.181 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:47.181 "is_configured": true, 00:17:47.181 "data_offset": 256, 00:17:47.181 "data_size": 7936 00:17:47.181 } 00:17:47.181 ] 00:17:47.181 }' 00:17:47.181 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.181 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.181 20:13:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.181 20:13:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.181 20:13:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.121 [2024-12-08 20:13:19.727038] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:48.121 [2024-12-08 20:13:19.727102] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:48.121 [2024-12-08 20:13:19.727208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.121 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.121 "name": "raid_bdev1", 00:17:48.121 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:48.121 "strip_size_kb": 0, 00:17:48.122 "state": "online", 00:17:48.122 "raid_level": "raid1", 00:17:48.122 "superblock": true, 00:17:48.122 "num_base_bdevs": 2, 00:17:48.122 "num_base_bdevs_discovered": 2, 00:17:48.122 "num_base_bdevs_operational": 2, 00:17:48.122 "base_bdevs_list": [ 00:17:48.122 { 00:17:48.122 "name": "spare", 00:17:48.122 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:48.122 "is_configured": true, 00:17:48.122 "data_offset": 256, 
00:17:48.122 "data_size": 7936 00:17:48.122 }, 00:17:48.122 { 00:17:48.122 "name": "BaseBdev2", 00:17:48.122 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:48.122 "is_configured": true, 00:17:48.122 "data_offset": 256, 00:17:48.122 "data_size": 7936 00:17:48.122 } 00:17:48.122 ] 00:17:48.122 }' 00:17:48.122 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.382 20:13:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.382 "name": "raid_bdev1", 00:17:48.382 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:48.382 "strip_size_kb": 0, 00:17:48.382 "state": "online", 00:17:48.382 "raid_level": "raid1", 00:17:48.382 "superblock": true, 00:17:48.382 "num_base_bdevs": 2, 00:17:48.382 "num_base_bdevs_discovered": 2, 00:17:48.382 "num_base_bdevs_operational": 2, 00:17:48.382 "base_bdevs_list": [ 00:17:48.382 { 00:17:48.382 "name": "spare", 00:17:48.382 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:48.382 "is_configured": true, 00:17:48.382 "data_offset": 256, 00:17:48.382 "data_size": 7936 00:17:48.382 }, 00:17:48.382 { 00:17:48.382 "name": "BaseBdev2", 00:17:48.382 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:48.382 "is_configured": true, 00:17:48.382 "data_offset": 256, 00:17:48.382 "data_size": 7936 00:17:48.382 } 00:17:48.382 ] 00:17:48.382 }' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.382 20:13:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.382 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.383 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.383 "name": "raid_bdev1", 00:17:48.383 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:48.383 "strip_size_kb": 0, 00:17:48.383 "state": "online", 00:17:48.383 "raid_level": "raid1", 00:17:48.383 "superblock": true, 00:17:48.383 "num_base_bdevs": 2, 00:17:48.383 "num_base_bdevs_discovered": 2, 
00:17:48.383 "num_base_bdevs_operational": 2, 00:17:48.383 "base_bdevs_list": [ 00:17:48.383 { 00:17:48.383 "name": "spare", 00:17:48.383 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:48.383 "is_configured": true, 00:17:48.383 "data_offset": 256, 00:17:48.383 "data_size": 7936 00:17:48.383 }, 00:17:48.383 { 00:17:48.383 "name": "BaseBdev2", 00:17:48.383 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:48.383 "is_configured": true, 00:17:48.383 "data_offset": 256, 00:17:48.383 "data_size": 7936 00:17:48.383 } 00:17:48.383 ] 00:17:48.383 }' 00:17:48.383 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.383 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 [2024-12-08 20:13:20.713288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.954 [2024-12-08 20:13:20.713362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.954 [2024-12-08 20:13:20.713463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.954 [2024-12-08 20:13:20.713582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.954 [2024-12-08 20:13:20.713630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 [2024-12-08 20:13:20.785146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.954 [2024-12-08 20:13:20.785198] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:48.954 [2024-12-08 20:13:20.785222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:48.954 [2024-12-08 20:13:20.785232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.954 [2024-12-08 20:13:20.787121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.954 [2024-12-08 20:13:20.787154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.954 [2024-12-08 20:13:20.787204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:48.954 [2024-12-08 20:13:20.787254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.954 [2024-12-08 20:13:20.787360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.954 spare 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.954 [2024-12-08 20:13:20.887254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:48.954 [2024-12-08 20:13:20.887282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:48.954 [2024-12-08 20:13:20.887393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:48.954 [2024-12-08 20:13:20.887493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:48.954 [2024-12-08 20:13:20.887503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:48.954 [2024-12-08 20:13:20.887606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.954 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.955 20:13:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.955 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.215 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.215 "name": "raid_bdev1", 00:17:49.215 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:49.215 "strip_size_kb": 0, 00:17:49.215 "state": "online", 00:17:49.215 "raid_level": "raid1", 00:17:49.215 "superblock": true, 00:17:49.215 "num_base_bdevs": 2, 00:17:49.215 "num_base_bdevs_discovered": 2, 00:17:49.215 "num_base_bdevs_operational": 2, 00:17:49.215 "base_bdevs_list": [ 00:17:49.215 { 00:17:49.215 "name": "spare", 00:17:49.215 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:49.215 "is_configured": true, 00:17:49.215 "data_offset": 256, 00:17:49.215 "data_size": 7936 00:17:49.215 }, 00:17:49.215 { 00:17:49.215 "name": "BaseBdev2", 00:17:49.215 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:49.215 "is_configured": true, 00:17:49.215 "data_offset": 256, 00:17:49.215 "data_size": 7936 00:17:49.215 } 00:17:49.215 ] 00:17:49.215 }' 00:17:49.215 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.215 20:13:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.475 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.475 "name": "raid_bdev1", 00:17:49.475 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:49.475 "strip_size_kb": 0, 00:17:49.475 "state": "online", 00:17:49.475 "raid_level": "raid1", 00:17:49.475 "superblock": true, 00:17:49.475 "num_base_bdevs": 2, 00:17:49.475 "num_base_bdevs_discovered": 2, 00:17:49.475 "num_base_bdevs_operational": 2, 00:17:49.475 "base_bdevs_list": [ 00:17:49.475 { 00:17:49.475 "name": "spare", 00:17:49.476 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:49.476 "is_configured": true, 00:17:49.476 "data_offset": 256, 00:17:49.476 "data_size": 7936 00:17:49.476 }, 00:17:49.476 { 00:17:49.476 "name": "BaseBdev2", 00:17:49.476 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:49.476 "is_configured": true, 00:17:49.476 "data_offset": 256, 00:17:49.476 "data_size": 7936 00:17:49.476 } 00:17:49.476 ] 00:17:49.476 }' 00:17:49.476 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.476 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.476 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.736 [2024-12-08 20:13:21.551932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.736 "name": "raid_bdev1", 00:17:49.736 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:49.736 "strip_size_kb": 0, 00:17:49.736 "state": "online", 00:17:49.736 "raid_level": "raid1", 00:17:49.736 "superblock": true, 00:17:49.736 "num_base_bdevs": 2, 00:17:49.736 "num_base_bdevs_discovered": 1, 00:17:49.736 "num_base_bdevs_operational": 1, 00:17:49.736 "base_bdevs_list": [ 00:17:49.736 { 00:17:49.736 "name": null, 00:17:49.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.736 
"is_configured": false, 00:17:49.736 "data_offset": 0, 00:17:49.736 "data_size": 7936 00:17:49.736 }, 00:17:49.736 { 00:17:49.736 "name": "BaseBdev2", 00:17:49.736 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:49.736 "is_configured": true, 00:17:49.736 "data_offset": 256, 00:17:49.736 "data_size": 7936 00:17:49.736 } 00:17:49.736 ] 00:17:49.736 }' 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.736 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.306 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.306 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.306 20:13:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.306 [2024-12-08 20:13:21.999150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.306 [2024-12-08 20:13:21.999345] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.306 [2024-12-08 20:13:21.999361] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:50.306 [2024-12-08 20:13:21.999398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.306 [2024-12-08 20:13:22.015253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:50.306 20:13:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.306 20:13:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:50.306 [2024-12-08 20:13:22.017044] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:51.247 "name": "raid_bdev1", 00:17:51.247 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:51.247 "strip_size_kb": 0, 00:17:51.247 "state": "online", 00:17:51.247 "raid_level": "raid1", 00:17:51.247 "superblock": true, 00:17:51.247 "num_base_bdevs": 2, 00:17:51.247 "num_base_bdevs_discovered": 2, 00:17:51.247 "num_base_bdevs_operational": 2, 00:17:51.247 "process": { 00:17:51.247 "type": "rebuild", 00:17:51.247 "target": "spare", 00:17:51.247 "progress": { 00:17:51.247 "blocks": 2560, 00:17:51.247 "percent": 32 00:17:51.247 } 00:17:51.247 }, 00:17:51.247 "base_bdevs_list": [ 00:17:51.247 { 00:17:51.247 "name": "spare", 00:17:51.247 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:51.247 "is_configured": true, 00:17:51.247 "data_offset": 256, 00:17:51.247 "data_size": 7936 00:17:51.247 }, 00:17:51.247 { 00:17:51.247 "name": "BaseBdev2", 00:17:51.247 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:51.247 "is_configured": true, 00:17:51.247 "data_offset": 256, 00:17:51.247 "data_size": 7936 00:17:51.247 } 00:17:51.247 ] 00:17:51.247 }' 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.247 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.247 [2024-12-08 20:13:23.180937] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.247 [2024-12-08 20:13:23.221828] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.247 [2024-12-08 20:13:23.221954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.247 [2024-12-08 20:13:23.222001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.247 [2024-12-08 20:13:23.222026] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.507 20:13:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.507 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.508 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.508 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.508 "name": "raid_bdev1", 00:17:51.508 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:51.508 "strip_size_kb": 0, 00:17:51.508 "state": "online", 00:17:51.508 "raid_level": "raid1", 00:17:51.508 "superblock": true, 00:17:51.508 "num_base_bdevs": 2, 00:17:51.508 "num_base_bdevs_discovered": 1, 00:17:51.508 "num_base_bdevs_operational": 1, 00:17:51.508 "base_bdevs_list": [ 00:17:51.508 { 00:17:51.508 "name": null, 00:17:51.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.508 "is_configured": false, 00:17:51.508 "data_offset": 0, 00:17:51.508 "data_size": 7936 00:17:51.508 }, 00:17:51.508 { 00:17:51.508 "name": "BaseBdev2", 00:17:51.508 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:51.508 "is_configured": true, 00:17:51.508 "data_offset": 256, 00:17:51.508 "data_size": 7936 00:17:51.508 } 00:17:51.508 ] 00:17:51.508 }' 00:17:51.508 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.508 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.768 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.769 20:13:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.769 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.769 [2024-12-08 20:13:23.687916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.769 [2024-12-08 20:13:23.687994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.769 [2024-12-08 20:13:23.688024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.769 [2024-12-08 20:13:23.688036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.769 [2024-12-08 20:13:23.688235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.769 [2024-12-08 20:13:23.688260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.769 [2024-12-08 20:13:23.688353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.769 [2024-12-08 20:13:23.688368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.769 [2024-12-08 20:13:23.688378] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:51.769 [2024-12-08 20:13:23.688417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.769 [2024-12-08 20:13:23.703976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:51.769 spare 00:17:51.769 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.769 20:13:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:51.769 [2024-12-08 20:13:23.705741] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:53.163 "name": "raid_bdev1", 00:17:53.163 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:53.163 "strip_size_kb": 0, 00:17:53.163 "state": "online", 00:17:53.163 "raid_level": "raid1", 00:17:53.163 "superblock": true, 00:17:53.163 "num_base_bdevs": 2, 00:17:53.163 "num_base_bdevs_discovered": 2, 00:17:53.163 "num_base_bdevs_operational": 2, 00:17:53.163 "process": { 00:17:53.163 "type": "rebuild", 00:17:53.163 "target": "spare", 00:17:53.163 "progress": { 00:17:53.163 "blocks": 2560, 00:17:53.163 "percent": 32 00:17:53.163 } 00:17:53.163 }, 00:17:53.163 "base_bdevs_list": [ 00:17:53.163 { 00:17:53.163 "name": "spare", 00:17:53.163 "uuid": "b00a65c9-2be0-54fd-98ee-6510c223ead2", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 256, 00:17:53.163 "data_size": 7936 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "name": "BaseBdev2", 00:17:53.163 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 256, 00:17:53.163 "data_size": 7936 00:17:53.163 } 00:17:53.163 ] 00:17:53.163 }' 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 [2024-12-08 
20:13:24.845593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.163 [2024-12-08 20:13:24.910471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.163 [2024-12-08 20:13:24.910522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.163 [2024-12-08 20:13:24.910555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.163 [2024-12-08 20:13:24.910562] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.163 20:13:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.163 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.163 "name": "raid_bdev1", 00:17:53.163 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:53.163 "strip_size_kb": 0, 00:17:53.163 "state": "online", 00:17:53.163 "raid_level": "raid1", 00:17:53.163 "superblock": true, 00:17:53.163 "num_base_bdevs": 2, 00:17:53.163 "num_base_bdevs_discovered": 1, 00:17:53.163 "num_base_bdevs_operational": 1, 00:17:53.163 "base_bdevs_list": [ 00:17:53.163 { 00:17:53.163 "name": null, 00:17:53.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.163 "is_configured": false, 00:17:53.163 "data_offset": 0, 00:17:53.163 "data_size": 7936 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "name": "BaseBdev2", 00:17:53.163 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 256, 00:17:53.164 "data_size": 7936 00:17:53.164 } 00:17:53.164 ] 00:17:53.164 }' 00:17:53.164 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.164 20:13:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.438 20:13:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.438 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.700 "name": "raid_bdev1", 00:17:53.700 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:53.700 "strip_size_kb": 0, 00:17:53.700 "state": "online", 00:17:53.700 "raid_level": "raid1", 00:17:53.700 "superblock": true, 00:17:53.700 "num_base_bdevs": 2, 00:17:53.700 "num_base_bdevs_discovered": 1, 00:17:53.700 "num_base_bdevs_operational": 1, 00:17:53.700 "base_bdevs_list": [ 00:17:53.700 { 00:17:53.700 "name": null, 00:17:53.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.700 "is_configured": false, 00:17:53.700 "data_offset": 0, 00:17:53.700 "data_size": 7936 00:17:53.700 }, 00:17:53.700 { 00:17:53.700 "name": "BaseBdev2", 00:17:53.700 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:53.700 "is_configured": true, 00:17:53.700 "data_offset": 256, 
00:17:53.700 "data_size": 7936 00:17:53.700 } 00:17:53.700 ] 00:17:53.700 }' 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.700 [2024-12-08 20:13:25.536077] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.700 [2024-12-08 20:13:25.536189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.700 [2024-12-08 20:13:25.536217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:53.700 [2024-12-08 20:13:25.536226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.700 [2024-12-08 20:13:25.536442] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.700 [2024-12-08 20:13:25.536457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.700 [2024-12-08 20:13:25.536510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:53.700 [2024-12-08 20:13:25.536523] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.700 [2024-12-08 20:13:25.536532] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:53.700 [2024-12-08 20:13:25.536543] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:53.700 BaseBdev1 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.700 20:13:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:54.638 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.639 20:13:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.639 "name": "raid_bdev1", 00:17:54.639 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:54.639 "strip_size_kb": 0, 00:17:54.639 "state": "online", 00:17:54.639 "raid_level": "raid1", 00:17:54.639 "superblock": true, 00:17:54.639 "num_base_bdevs": 2, 00:17:54.639 "num_base_bdevs_discovered": 1, 00:17:54.639 "num_base_bdevs_operational": 1, 00:17:54.639 "base_bdevs_list": [ 00:17:54.639 { 00:17:54.639 "name": null, 00:17:54.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.639 "is_configured": false, 00:17:54.639 "data_offset": 0, 00:17:54.639 "data_size": 7936 00:17:54.639 }, 00:17:54.639 { 00:17:54.639 "name": "BaseBdev2", 00:17:54.639 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:54.639 "is_configured": true, 00:17:54.639 "data_offset": 256, 00:17:54.639 "data_size": 7936 00:17:54.639 } 00:17:54.639 ] 00:17:54.639 }' 00:17:54.639 20:13:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.639 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.209 "name": "raid_bdev1", 00:17:55.209 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:55.209 "strip_size_kb": 0, 00:17:55.209 "state": "online", 00:17:55.209 "raid_level": "raid1", 00:17:55.209 "superblock": true, 00:17:55.209 "num_base_bdevs": 2, 00:17:55.209 "num_base_bdevs_discovered": 1, 00:17:55.209 "num_base_bdevs_operational": 1, 00:17:55.209 "base_bdevs_list": [ 00:17:55.209 { 00:17:55.209 "name": 
null, 00:17:55.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.209 "is_configured": false, 00:17:55.209 "data_offset": 0, 00:17:55.209 "data_size": 7936 00:17:55.209 }, 00:17:55.209 { 00:17:55.209 "name": "BaseBdev2", 00:17:55.209 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:55.209 "is_configured": true, 00:17:55.209 "data_offset": 256, 00:17:55.209 "data_size": 7936 00:17:55.209 } 00:17:55.209 ] 00:17:55.209 }' 00:17:55.209 20:13:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.209 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.210 [2024-12-08 20:13:27.101451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.210 [2024-12-08 20:13:27.101677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.210 [2024-12-08 20:13:27.101739] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.210 request: 00:17:55.210 { 00:17:55.210 "base_bdev": "BaseBdev1", 00:17:55.210 "raid_bdev": "raid_bdev1", 00:17:55.210 "method": "bdev_raid_add_base_bdev", 00:17:55.210 "req_id": 1 00:17:55.210 } 00:17:55.210 Got JSON-RPC error response 00:17:55.210 response: 00:17:55.210 { 00:17:55.210 "code": -22, 00:17:55.210 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:55.210 } 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.210 20:13:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.150 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.410 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.410 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.410 "name": "raid_bdev1", 00:17:56.410 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:56.410 "strip_size_kb": 0, 
00:17:56.410 "state": "online", 00:17:56.410 "raid_level": "raid1", 00:17:56.410 "superblock": true, 00:17:56.410 "num_base_bdevs": 2, 00:17:56.410 "num_base_bdevs_discovered": 1, 00:17:56.410 "num_base_bdevs_operational": 1, 00:17:56.410 "base_bdevs_list": [ 00:17:56.410 { 00:17:56.410 "name": null, 00:17:56.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.410 "is_configured": false, 00:17:56.410 "data_offset": 0, 00:17:56.410 "data_size": 7936 00:17:56.410 }, 00:17:56.410 { 00:17:56.410 "name": "BaseBdev2", 00:17:56.410 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:56.410 "is_configured": true, 00:17:56.410 "data_offset": 256, 00:17:56.410 "data_size": 7936 00:17:56.410 } 00:17:56.410 ] 00:17:56.410 }' 00:17:56.410 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.410 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.671 
20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.671 "name": "raid_bdev1", 00:17:56.671 "uuid": "20670f02-f7f3-471f-980a-8cd27c8e1fed", 00:17:56.671 "strip_size_kb": 0, 00:17:56.671 "state": "online", 00:17:56.671 "raid_level": "raid1", 00:17:56.671 "superblock": true, 00:17:56.671 "num_base_bdevs": 2, 00:17:56.671 "num_base_bdevs_discovered": 1, 00:17:56.671 "num_base_bdevs_operational": 1, 00:17:56.671 "base_bdevs_list": [ 00:17:56.671 { 00:17:56.671 "name": null, 00:17:56.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.671 "is_configured": false, 00:17:56.671 "data_offset": 0, 00:17:56.671 "data_size": 7936 00:17:56.671 }, 00:17:56.671 { 00:17:56.671 "name": "BaseBdev2", 00:17:56.671 "uuid": "6b077bf8-680b-5eee-a9ac-17b8621f14a7", 00:17:56.671 "is_configured": true, 00:17:56.671 "data_offset": 256, 00:17:56.671 "data_size": 7936 00:17:56.671 } 00:17:56.671 ] 00:17:56.671 }' 00:17:56.671 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88652 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88652 ']' 00:17:56.931 20:13:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88652 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88652 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.931 killing process with pid 88652 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88652' 00:17:56.931 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88652 00:17:56.931 Received shutdown signal, test time was about 60.000000 seconds 00:17:56.931 00:17:56.931 Latency(us) 00:17:56.931 [2024-12-08T20:13:28.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.931 [2024-12-08T20:13:28.909Z] =================================================================================================================== 00:17:56.931 [2024-12-08T20:13:28.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.932 [2024-12-08 20:13:28.754693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.932 20:13:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88652 00:17:56.932 [2024-12-08 20:13:28.754883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.932 [2024-12-08 20:13:28.754936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:56.932 [2024-12-08 20:13:28.754959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:57.192 [2024-12-08 20:13:29.047530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.153 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:58.153 00:17:58.153 real 0m17.415s 00:17:58.153 user 0m22.866s 00:17:58.153 sys 0m1.572s 00:17:58.153 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.153 ************************************ 00:17:58.153 END TEST raid_rebuild_test_sb_md_interleaved 00:17:58.153 ************************************ 00:17:58.153 20:13:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.412 20:13:30 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:58.412 20:13:30 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:58.412 20:13:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88652 ']' 00:17:58.412 20:13:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88652 00:17:58.412 20:13:30 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:58.412 00:17:58.412 real 11m41.470s 00:17:58.412 user 15m47.161s 00:17:58.412 sys 1m48.511s 00:17:58.412 20:13:30 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.412 ************************************ 00:17:58.412 END TEST bdev_raid 00:17:58.412 ************************************ 00:17:58.412 20:13:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.412 20:13:30 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.412 20:13:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:58.412 20:13:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.412 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:58.412 
************************************ 00:17:58.412 START TEST spdkcli_raid 00:17:58.412 ************************************ 00:17:58.412 20:13:30 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.412 * Looking for test storage... 00:17:58.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.412 20:13:30 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.672 20:13:30 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.672 --rc genhtml_branch_coverage=1 00:17:58.672 --rc genhtml_function_coverage=1 00:17:58.672 --rc genhtml_legend=1 00:17:58.672 --rc geninfo_all_blocks=1 00:17:58.672 --rc geninfo_unexecuted_blocks=1 00:17:58.672 00:17:58.672 ' 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.672 --rc genhtml_branch_coverage=1 00:17:58.672 --rc genhtml_function_coverage=1 00:17:58.672 --rc genhtml_legend=1 00:17:58.672 --rc geninfo_all_blocks=1 00:17:58.672 --rc geninfo_unexecuted_blocks=1 00:17:58.672 00:17:58.672 ' 00:17:58.672 
20:13:30 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.672 --rc genhtml_branch_coverage=1 00:17:58.672 --rc genhtml_function_coverage=1 00:17:58.672 --rc genhtml_legend=1 00:17:58.672 --rc geninfo_all_blocks=1 00:17:58.672 --rc geninfo_unexecuted_blocks=1 00:17:58.672 00:17:58.672 ' 00:17:58.672 20:13:30 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.672 --rc genhtml_branch_coverage=1 00:17:58.672 --rc genhtml_function_coverage=1 00:17:58.672 --rc genhtml_legend=1 00:17:58.672 --rc geninfo_all_blocks=1 00:17:58.672 --rc geninfo_unexecuted_blocks=1 00:17:58.672 00:17:58.672 ' 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:58.672 20:13:30 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:58.672 20:13:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89329 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:58.673 20:13:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89329 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89329 ']' 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.673 20:13:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.673 [2024-12-08 20:13:30.620591] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:17:58.673 [2024-12-08 20:13:30.620697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89329 ] 00:17:58.932 [2024-12-08 20:13:30.793548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.932 [2024-12-08 20:13:30.899886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.932 [2024-12-08 20:13:30.899922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:59.907 20:13:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.907 20:13:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.907 20:13:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.907 20:13:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:59.907 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:59.907 ' 00:18:01.813 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:01.813 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:01.813 20:13:33 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:01.813 20:13:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.813 20:13:33 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.813 20:13:33 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:01.813 20:13:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.813 20:13:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.813 20:13:33 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:01.813 ' 00:18:02.753 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:02.753 20:13:34 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:02.753 20:13:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.753 20:13:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 20:13:34 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:02.753 20:13:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.753 20:13:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.753 20:13:34 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:02.753 20:13:34 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:03.321 20:13:35 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:03.321 20:13:35 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:03.321 20:13:35 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:03.321 20:13:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.321 20:13:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.321 20:13:35 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:03.321 20:13:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.321 20:13:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.321 20:13:35 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:03.321 ' 00:18:04.702 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:04.702 20:13:36 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:04.702 20:13:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.702 20:13:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.702 20:13:36 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:04.702 20:13:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.702 20:13:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.702 20:13:36 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:04.702 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:04.702 ' 00:18:06.085 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:06.085 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:06.085 20:13:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.085 20:13:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89329 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89329 ']' 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89329 00:18:06.085 20:13:37 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89329 00:18:06.085 killing process with pid 89329 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89329' 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89329 00:18:06.085 20:13:37 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89329 00:18:08.627 20:13:40 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:08.627 20:13:40 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89329 ']' 00:18:08.628 20:13:40 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89329 00:18:08.628 20:13:40 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89329 ']' 00:18:08.628 20:13:40 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89329 00:18:08.628 Process with pid 89329 is not found 00:18:08.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89329) - No such process 00:18:08.628 20:13:40 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89329 is not found' 00:18:08.628 20:13:40 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:08.628 20:13:40 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:08.628 20:13:40 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:08.628 20:13:40 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:08.628 ************************************ 00:18:08.628 END TEST spdkcli_raid 
00:18:08.628 ************************************ 00:18:08.628 00:18:08.628 real 0m10.069s 00:18:08.628 user 0m20.786s 00:18:08.628 sys 0m1.120s 00:18:08.628 20:13:40 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.628 20:13:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 20:13:40 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:08.628 20:13:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.628 20:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.628 20:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:08.628 ************************************ 00:18:08.628 START TEST blockdev_raid5f 00:18:08.628 ************************************ 00:18:08.628 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:08.628 * Looking for test storage... 00:18:08.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:08.628 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:08.628 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:08.628 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:08.628 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.628 20:13:40 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.889 20:13:40 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:08.889 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.889 --rc genhtml_branch_coverage=1 00:18:08.889 --rc genhtml_function_coverage=1 00:18:08.889 --rc genhtml_legend=1 00:18:08.889 --rc geninfo_all_blocks=1 00:18:08.889 --rc geninfo_unexecuted_blocks=1 00:18:08.889 00:18:08.889 ' 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.889 --rc genhtml_branch_coverage=1 00:18:08.889 --rc genhtml_function_coverage=1 00:18:08.889 --rc genhtml_legend=1 00:18:08.889 --rc geninfo_all_blocks=1 00:18:08.889 --rc geninfo_unexecuted_blocks=1 00:18:08.889 00:18:08.889 ' 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.889 --rc genhtml_branch_coverage=1 00:18:08.889 --rc genhtml_function_coverage=1 00:18:08.889 --rc genhtml_legend=1 00:18:08.889 --rc geninfo_all_blocks=1 00:18:08.889 --rc geninfo_unexecuted_blocks=1 00:18:08.889 00:18:08.889 ' 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:08.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.889 --rc genhtml_branch_coverage=1 00:18:08.889 --rc genhtml_function_coverage=1 00:18:08.889 --rc genhtml_legend=1 00:18:08.889 --rc geninfo_all_blocks=1 00:18:08.889 --rc geninfo_unexecuted_blocks=1 00:18:08.889 00:18:08.889 ' 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89609 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:08.889 20:13:40 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89609 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89609 ']' 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.889 20:13:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:08.889 [2024-12-08 20:13:40.737806] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:08.889 [2024-12-08 20:13:40.738023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89609 ] 00:18:09.149 [2024-12-08 20:13:40.910351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.149 [2024-12-08 20:13:41.014824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.090 Malloc0 00:18:10.090 Malloc1 00:18:10.090 Malloc2 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.090 20:13:41 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.090 20:13:41 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.090 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.090 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:10.090 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:10.090 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.090 20:13:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "d8545925-388a-4805-bc78-5856112a6d27"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d8545925-388a-4805-bc78-5856112a6d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d8545925-388a-4805-bc78-5856112a6d27",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b0448d49-3786-44c7-9e19-1267fde821f7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ada9bfbc-c308-449a-8600-4b340e4cb5bf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "5b2a7fe1-8bde-4f03-8405-a4e1a6f8a716",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:10.350 20:13:42 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89609 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89609 ']' 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89609 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.350 
20:13:42 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89609 00:18:10.350 killing process with pid 89609 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89609' 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89609 00:18:10.350 20:13:42 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89609 00:18:12.892 20:13:44 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:12.892 20:13:44 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:12.892 20:13:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:12.892 20:13:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.892 20:13:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:12.892 ************************************ 00:18:12.892 START TEST bdev_hello_world 00:18:12.892 ************************************ 00:18:12.892 20:13:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:12.892 [2024-12-08 20:13:44.819628] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:12.892 [2024-12-08 20:13:44.819747] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89671 ] 00:18:13.152 [2024-12-08 20:13:44.991428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.152 [2024-12-08 20:13:45.097315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.721 [2024-12-08 20:13:45.612591] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:13.721 [2024-12-08 20:13:45.612643] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:13.721 [2024-12-08 20:13:45.612659] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:13.721 [2024-12-08 20:13:45.613169] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:13.721 [2024-12-08 20:13:45.613315] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:13.721 [2024-12-08 20:13:45.613331] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:13.721 [2024-12-08 20:13:45.613375] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:13.721 00:18:13.721 [2024-12-08 20:13:45.613391] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:15.102 00:18:15.102 real 0m2.189s 00:18:15.102 user 0m1.826s 00:18:15.102 sys 0m0.237s 00:18:15.102 ************************************ 00:18:15.102 END TEST bdev_hello_world 00:18:15.102 ************************************ 00:18:15.102 20:13:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.102 20:13:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:15.102 20:13:46 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:15.102 20:13:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.102 20:13:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.102 20:13:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:15.102 ************************************ 00:18:15.102 START TEST bdev_bounds 00:18:15.102 ************************************ 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:15.102 Process bdevio pid: 89713 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89713 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89713' 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89713 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89713 ']' 00:18:15.102 20:13:46 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.102 20:13:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:15.362 [2024-12-08 20:13:47.081925] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:15.362 [2024-12-08 20:13:47.082123] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89713 ] 00:18:15.362 [2024-12-08 20:13:47.254389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.621 [2024-12-08 20:13:47.361125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.621 [2024-12-08 20:13:47.361272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.621 [2024-12-08 20:13:47.361309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.190 20:13:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.190 20:13:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:16.190 20:13:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:16.190 I/O targets: 00:18:16.190 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:16.190 00:18:16.190 
00:18:16.190 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.190 http://cunit.sourceforge.net/ 00:18:16.190 00:18:16.190 00:18:16.190 Suite: bdevio tests on: raid5f 00:18:16.190 Test: blockdev write read block ...passed 00:18:16.190 Test: blockdev write zeroes read block ...passed 00:18:16.190 Test: blockdev write zeroes read no split ...passed 00:18:16.190 Test: blockdev write zeroes read split ...passed 00:18:16.450 Test: blockdev write zeroes read split partial ...passed 00:18:16.450 Test: blockdev reset ...passed 00:18:16.450 Test: blockdev write read 8 blocks ...passed 00:18:16.450 Test: blockdev write read size > 128k ...passed 00:18:16.450 Test: blockdev write read invalid size ...passed 00:18:16.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:16.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:16.450 Test: blockdev write read max offset ...passed 00:18:16.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:16.450 Test: blockdev writev readv 8 blocks ...passed 00:18:16.450 Test: blockdev writev readv 30 x 1block ...passed 00:18:16.450 Test: blockdev writev readv block ...passed 00:18:16.450 Test: blockdev writev readv size > 128k ...passed 00:18:16.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:16.450 Test: blockdev comparev and writev ...passed 00:18:16.450 Test: blockdev nvme passthru rw ...passed 00:18:16.450 Test: blockdev nvme passthru vendor specific ...passed 00:18:16.450 Test: blockdev nvme admin passthru ...passed 00:18:16.451 Test: blockdev copy ...passed 00:18:16.451 00:18:16.451 Run Summary: Type Total Ran Passed Failed Inactive 00:18:16.451 suites 1 1 n/a 0 0 00:18:16.451 tests 23 23 23 0 0 00:18:16.451 asserts 130 130 130 0 n/a 00:18:16.451 00:18:16.451 Elapsed time = 0.580 seconds 00:18:16.451 0 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89713 00:18:16.451 
20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89713 ']' 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89713 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89713 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.451 killing process with pid 89713 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89713' 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89713 00:18:16.451 20:13:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89713 00:18:17.831 20:13:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:17.831 00:18:17.831 real 0m2.656s 00:18:17.831 user 0m6.621s 00:18:17.831 sys 0m0.346s 00:18:17.831 20:13:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.831 ************************************ 00:18:17.831 END TEST bdev_bounds 00:18:17.831 ************************************ 00:18:17.831 20:13:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:17.831 20:13:49 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:17.831 20:13:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:17.831 20:13:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.831 
20:13:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:17.831 ************************************ 00:18:17.831 START TEST bdev_nbd 00:18:17.831 ************************************ 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89778 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89778 /var/tmp/spdk-nbd.sock 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89778 ']' 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:17.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.831 20:13:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:18.092 [2024-12-08 20:13:49.810818] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:18.092 [2024-12-08 20:13:49.811021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.092 [2024-12-08 20:13:49.984398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.352 [2024-12-08 20:13:50.089361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.924 1+0 records in 00:18:18.924 1+0 records out 00:18:18.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773869 s, 5.3 MB/s 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:18.924 20:13:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:19.185 { 00:18:19.185 "nbd_device": "/dev/nbd0", 00:18:19.185 "bdev_name": "raid5f" 00:18:19.185 } 00:18:19.185 ]' 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:19.185 { 00:18:19.185 "nbd_device": "/dev/nbd0", 00:18:19.185 "bdev_name": "raid5f" 00:18:19.185 } 00:18:19.185 ]' 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.185 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.445 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.706 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:19.967 /dev/nbd0 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.967 20:13:51 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.967 1+0 records in 00:18:19.967 1+0 records out 00:18:19.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554451 s, 7.4 MB/s 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.967 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.228 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:20.228 { 00:18:20.228 "nbd_device": "/dev/nbd0", 00:18:20.228 "bdev_name": "raid5f" 00:18:20.228 } 00:18:20.228 ]' 00:18:20.228 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.228 20:13:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:20.228 { 00:18:20.228 "nbd_device": "/dev/nbd0", 00:18:20.228 "bdev_name": "raid5f" 00:18:20.228 } 00:18:20.228 ]' 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:20.228 256+0 records in 00:18:20.228 256+0 records out 00:18:20.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136945 s, 76.6 MB/s 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:20.228 256+0 records in 00:18:20.228 256+0 records out 00:18:20.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305607 s, 34.3 MB/s 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.228 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.488 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:20.749 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:21.009 malloc_lvol_verify 00:18:21.009 20:13:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:21.268 469896a4-efc5-4f9a-95dd-9ced627e48d9 00:18:21.268 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:21.268 0e76f09c-4551-45e2-81cb-1d932b7e1cc9 00:18:21.268 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:21.545 /dev/nbd0 00:18:21.545 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:21.545 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:21.545 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:21.545 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:21.545 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:21.545 mke2fs 1.47.0 (5-Feb-2023) 00:18:21.545 Discarding device blocks: 0/4096 done 00:18:21.546 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:21.546 00:18:21.546 Allocating group tables: 0/1 done 00:18:21.546 Writing inode tables: 0/1 done 00:18:21.546 Creating journal (1024 blocks): done 00:18:21.546 Writing superblocks and filesystem accounting information: 0/1 done 00:18:21.546 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.546 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89778 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89778 ']' 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89778 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89778 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.806 killing process with pid 89778 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89778' 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89778 00:18:21.806 20:13:53 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89778 00:18:23.213 20:13:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:23.213 00:18:23.213 real 0m5.341s 00:18:23.213 user 0m7.240s 00:18:23.213 sys 0m1.216s 00:18:23.213 20:13:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.213 20:13:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:23.213 ************************************ 00:18:23.213 END TEST bdev_nbd 00:18:23.213 ************************************ 00:18:23.214 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:23.214 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:23.214 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:23.214 20:13:55 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:23.214 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.214 20:13:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.214 20:13:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.214 ************************************ 00:18:23.214 START TEST bdev_fio 00:18:23.214 ************************************ 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:23.214 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:23.214 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:23.484 ************************************ 00:18:23.484 START TEST bdev_fio_rw_verify 00:18:23.484 ************************************ 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:23.484 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:23.485 20:13:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:23.743 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:23.743 fio-3.35 00:18:23.743 Starting 1 thread 00:18:35.969 00:18:35.969 job_raid5f: (groupid=0, jobs=1): err= 0: pid=89972: Sun Dec 8 20:14:06 2024 00:18:35.969 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(475MiB/10001msec) 00:18:35.969 slat (nsec): min=17804, max=84644, avg=20018.14, stdev=2224.35 00:18:35.969 clat (usec): min=9, max=297, avg=131.40, stdev=47.60 00:18:35.969 lat (usec): min=28, max=323, avg=151.42, stdev=47.85 00:18:35.969 clat percentiles (usec): 00:18:35.969 | 50.000th=[ 133], 99.000th=[ 221], 99.900th=[ 251], 99.990th=[ 273], 00:18:35.969 | 99.999th=[ 289] 00:18:35.969 write: IOPS=12.8k, BW=49.9MiB/s (52.4MB/s)(493MiB/9872msec); 0 zone resets 00:18:35.969 slat (usec): min=7, max=159, avg=16.61, stdev= 3.36 00:18:35.969 clat (usec): min=57, max=526, avg=299.81, stdev=38.53 00:18:35.969 lat (usec): min=73, max=543, avg=316.42, stdev=39.10 00:18:35.969 clat percentiles (usec): 00:18:35.969 | 50.000th=[ 302], 99.000th=[ 392], 99.900th=[ 441], 99.990th=[ 486], 00:18:35.969 | 99.999th=[ 515] 00:18:35.969 bw ( KiB/s): min=46696, max=51992, per=98.70%, avg=50461.89, stdev=1227.28, samples=19 00:18:35.969 iops : min=11674, max=12998, avg=12615.47, stdev=306.82, samples=19 00:18:35.969 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=15.93%, 250=39.04% 00:18:35.969 lat (usec) : 500=45.03%, 750=0.01% 00:18:35.969 cpu : usr=99.15%, sys=0.27%, ctx=25, majf=0, minf=10003 00:18:35.969 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.969 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.969 issued rwts: total=121660,126184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:35.969 00:18:35.969 Run status group 0 (all jobs): 00:18:35.969 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=475MiB (498MB), run=10001-10001msec 00:18:35.969 WRITE: bw=49.9MiB/s (52.4MB/s), 49.9MiB/s-49.9MiB/s (52.4MB/s-52.4MB/s), io=493MiB (517MB), run=9872-9872msec 00:18:36.230 ----------------------------------------------------- 00:18:36.230 Suppressions used: 00:18:36.230 count bytes template 00:18:36.230 1 7 /usr/src/fio/parse.c 00:18:36.230 737 70752 /usr/src/fio/iolog.c 00:18:36.230 1 8 libtcmalloc_minimal.so 00:18:36.230 1 904 libcrypto.so 00:18:36.230 ----------------------------------------------------- 00:18:36.230 00:18:36.230 00:18:36.230 real 0m12.719s 00:18:36.230 user 0m12.465s 00:18:36.230 sys 0m0.818s 00:18:36.230 20:14:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.230 20:14:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 ************************************ 00:18:36.230 END TEST bdev_fio_rw_verify 00:18:36.230 ************************************ 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "d8545925-388a-4805-bc78-5856112a6d27"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d8545925-388a-4805-bc78-5856112a6d27",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d8545925-388a-4805-bc78-5856112a6d27",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b0448d49-3786-44c7-9e19-1267fde821f7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ada9bfbc-c308-449a-8600-4b340e4cb5bf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "5b2a7fe1-8bde-4f03-8405-a4e1a6f8a716",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.230 /home/vagrant/spdk_repo/spdk 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:36.230 00:18:36.230 real 
0m12.993s 00:18:36.230 user 0m12.575s 00:18:36.230 sys 0m0.953s 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.230 20:14:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 ************************************ 00:18:36.230 END TEST bdev_fio 00:18:36.230 ************************************ 00:18:36.230 20:14:08 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:36.230 20:14:08 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:36.230 20:14:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:36.230 20:14:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.230 20:14:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 ************************************ 00:18:36.230 START TEST bdev_verify 00:18:36.230 ************************************ 00:18:36.230 20:14:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:36.491 [2024-12-08 20:14:08.269289] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:18:36.491 [2024-12-08 20:14:08.269406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90141 ] 00:18:36.491 [2024-12-08 20:14:08.441183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:36.751 [2024-12-08 20:14:08.548963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.751 [2024-12-08 20:14:08.549014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.322 Running I/O for 5 seconds... 00:18:39.204 16833.00 IOPS, 65.75 MiB/s [2024-12-08T20:14:12.123Z] 16887.50 IOPS, 65.97 MiB/s [2024-12-08T20:14:13.064Z] 16717.33 IOPS, 65.30 MiB/s [2024-12-08T20:14:14.446Z] 16440.50 IOPS, 64.22 MiB/s [2024-12-08T20:14:14.446Z] 16519.80 IOPS, 64.53 MiB/s 00:18:42.468 Latency(us) 00:18:42.468 [2024-12-08T20:14:14.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.468 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:42.468 Verification LBA range: start 0x0 length 0x2000 00:18:42.468 raid5f : 5.01 8228.95 32.14 0.00 0.00 23399.72 246.83 21177.57 00:18:42.468 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.468 Verification LBA range: start 0x2000 length 0x2000 00:18:42.468 raid5f : 5.02 8291.59 32.39 0.00 0.00 23227.62 88.54 21406.52 00:18:42.468 [2024-12-08T20:14:14.446Z] =================================================================================================================== 00:18:42.468 [2024-12-08T20:14:14.447Z] Total : 16520.54 64.53 0.00 0.00 23313.33 88.54 21406.52 00:18:43.851 00:18:43.851 real 0m7.232s 00:18:43.851 user 0m13.434s 00:18:43.851 sys 0m0.238s 00:18:43.851 20:14:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.851 20:14:15 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:43.851 ************************************ 00:18:43.851 END TEST bdev_verify 00:18:43.851 ************************************ 00:18:43.851 20:14:15 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:43.851 20:14:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:43.851 20:14:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.851 20:14:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:43.851 ************************************ 00:18:43.851 START TEST bdev_verify_big_io 00:18:43.851 ************************************ 00:18:43.851 20:14:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:43.851 [2024-12-08 20:14:15.571226] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:43.851 [2024-12-08 20:14:15.571342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90238 ] 00:18:43.851 [2024-12-08 20:14:15.743345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.111 [2024-12-08 20:14:15.854339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.111 [2024-12-08 20:14:15.854371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.681 Running I/O for 5 seconds... 
00:18:46.571 758.00 IOPS, 47.38 MiB/s [2024-12-08T20:14:19.971Z] 761.00 IOPS, 47.56 MiB/s [2024-12-08T20:14:20.908Z] 823.67 IOPS, 51.48 MiB/s [2024-12-08T20:14:21.846Z] 840.50 IOPS, 52.53 MiB/s [2024-12-08T20:14:21.846Z] 863.20 IOPS, 53.95 MiB/s 00:18:49.868 Latency(us) 00:18:49.868 [2024-12-08T20:14:21.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.868 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:49.868 Verification LBA range: start 0x0 length 0x200 00:18:49.868 raid5f : 5.32 428.92 26.81 0.00 0.00 7439753.62 152.03 335178.01 00:18:49.868 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:49.868 Verification LBA range: start 0x200 length 0x200 00:18:49.868 raid5f : 5.32 429.64 26.85 0.00 0.00 7391124.94 237.89 333346.43 00:18:49.868 [2024-12-08T20:14:21.846Z] =================================================================================================================== 00:18:49.868 [2024-12-08T20:14:21.846Z] Total : 858.56 53.66 0.00 0.00 7415439.28 152.03 335178.01 00:18:51.247 ************************************ 00:18:51.247 END TEST bdev_verify_big_io 00:18:51.247 ************************************ 00:18:51.247 00:18:51.247 real 0m7.597s 00:18:51.247 user 0m14.137s 00:18:51.247 sys 0m0.255s 00:18:51.247 20:14:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.247 20:14:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 20:14:23 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.247 20:14:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:51.247 20:14:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.247 20:14:23 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 ************************************ 00:18:51.247 START TEST bdev_write_zeroes 00:18:51.247 ************************************ 00:18:51.247 20:14:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.507 [2024-12-08 20:14:23.236615] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:51.507 [2024-12-08 20:14:23.236738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90342 ] 00:18:51.507 [2024-12-08 20:14:23.407223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.766 [2024-12-08 20:14:23.513578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.336 Running I/O for 1 seconds... 
00:18:53.276 29391.00 IOPS, 114.81 MiB/s 00:18:53.276 Latency(us) 00:18:53.276 [2024-12-08T20:14:25.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.276 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:53.276 raid5f : 1.01 29352.95 114.66 0.00 0.00 4347.28 1674.17 6124.32 00:18:53.276 [2024-12-08T20:14:25.254Z] =================================================================================================================== 00:18:53.276 [2024-12-08T20:14:25.254Z] Total : 29352.95 114.66 0.00 0.00 4347.28 1674.17 6124.32 00:18:54.657 00:18:54.657 real 0m3.209s 00:18:54.657 user 0m2.848s 00:18:54.657 sys 0m0.237s 00:18:54.657 20:14:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.657 20:14:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:54.657 ************************************ 00:18:54.657 END TEST bdev_write_zeroes 00:18:54.657 ************************************ 00:18:54.657 20:14:26 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:54.657 20:14:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:54.657 20:14:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.657 20:14:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.657 ************************************ 00:18:54.657 START TEST bdev_json_nonenclosed 00:18:54.657 ************************************ 00:18:54.658 20:14:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:54.658 [2024-12-08 
20:14:26.516471] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:54.658 [2024-12-08 20:14:26.516571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90395 ] 00:18:54.918 [2024-12-08 20:14:26.687182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.918 [2024-12-08 20:14:26.790300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.918 [2024-12-08 20:14:26.790411] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:54.918 [2024-12-08 20:14:26.790446] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:54.918 [2024-12-08 20:14:26.790458] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:55.178 00:18:55.178 real 0m0.595s 00:18:55.178 user 0m0.374s 00:18:55.178 sys 0m0.118s 00:18:55.178 20:14:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.178 20:14:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:55.178 ************************************ 00:18:55.178 END TEST bdev_json_nonenclosed 00:18:55.178 ************************************ 00:18:55.178 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:55.178 20:14:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:55.178 20:14:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.178 20:14:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.178 
************************************ 00:18:55.178 START TEST bdev_json_nonarray 00:18:55.178 ************************************ 00:18:55.178 20:14:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:55.439 [2024-12-08 20:14:27.182691] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:18:55.439 [2024-12-08 20:14:27.182787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90415 ] 00:18:55.439 [2024-12-08 20:14:27.354103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.707 [2024-12-08 20:14:27.462473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.707 [2024-12-08 20:14:27.462582] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:55.707 [2024-12-08 20:14:27.462599] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:18:55.707 [2024-12-08 20:14:27.462618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:18:55.968
00:18:55.968 real	0m0.600s
00:18:55.968 user	0m0.370s
00:18:55.968 sys	0m0.125s
00:18:55.968 20:14:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:55.968 20:14:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:18:55.968 ************************************
00:18:55.968 END TEST bdev_json_nonarray
00:18:55.968 ************************************
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:18:55.968 20:14:27 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:18:55.968
00:18:55.968 real	0m47.372s
00:18:55.968 user	1m3.870s
00:18:55.968 sys	0m4.760s
00:18:55.968 20:14:27 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:55.968 20:14:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:55.968 ************************************
00:18:55.968 END TEST blockdev_raid5f
00:18:55.968 ************************************
00:18:55.968 20:14:27 -- spdk/autotest.sh@194 -- # uname -s
00:18:55.968 20:14:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@260 -- # timing_exit lib
00:18:55.968 20:14:27 -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:55.968 20:14:27 -- common/autotest_common.sh@10 -- # set +x
00:18:55.968 20:14:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:18:55.968 20:14:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:18:55.968 20:14:27 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:18:55.968 20:14:27 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:18:55.968 20:14:27 -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:55.968 20:14:27 -- common/autotest_common.sh@10 -- # set +x
00:18:55.968 20:14:27 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:18:55.968 20:14:27 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:18:55.968 20:14:27 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:18:55.968 20:14:27 -- common/autotest_common.sh@10 -- # set +x
00:18:58.512 INFO: APP EXITING
00:18:58.512 INFO: killing all VMs
00:18:58.512 INFO: killing vhost app
00:18:58.512 INFO: EXIT DONE
00:18:58.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:58.776 Waiting for block devices as requested
00:18:58.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:18:58.776 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:18:59.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:59.717 Cleaning
00:18:59.717 Removing: /var/run/dpdk/spdk0/config
00:18:59.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:18:59.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:18:59.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:18:59.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:18:59.717 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:18:59.717 Removing: /var/run/dpdk/spdk0/hugepage_info
00:18:59.717 Removing: /dev/shm/spdk_tgt_trace.pid56818
00:18:59.717 Removing: /var/run/dpdk/spdk0
00:18:59.717 Removing: /var/run/dpdk/spdk_pid56577
00:18:59.717 Removing: /var/run/dpdk/spdk_pid56818
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57047
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57151
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57207
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57335
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57353
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57563
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57676
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57783
00:18:59.717 Removing: /var/run/dpdk/spdk_pid57906
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58014
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58048
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58090
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58160
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58261
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58708
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58783
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58852
00:18:59.717 Removing: /var/run/dpdk/spdk_pid58872
00:18:59.717 Removing: /var/run/dpdk/spdk_pid59016
00:18:59.717 Removing: /var/run/dpdk/spdk_pid59032
00:18:59.717 Removing: /var/run/dpdk/spdk_pid59183
00:18:59.717 Removing: /var/run/dpdk/spdk_pid59199
00:18:59.717 Removing: /var/run/dpdk/spdk_pid59266
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59289
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59353
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59371
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59577
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59608
00:18:59.978 Removing: /var/run/dpdk/spdk_pid59697
00:18:59.978 Removing: /var/run/dpdk/spdk_pid61051
00:18:59.978 Removing: /var/run/dpdk/spdk_pid61264
00:18:59.978 Removing: /var/run/dpdk/spdk_pid61404
00:18:59.978 Removing: /var/run/dpdk/spdk_pid62047
00:18:59.978 Removing: /var/run/dpdk/spdk_pid62259
00:18:59.978 Removing: /var/run/dpdk/spdk_pid62403
00:18:59.978 Removing: /var/run/dpdk/spdk_pid63048
00:18:59.978 Removing: /var/run/dpdk/spdk_pid63378
00:18:59.978 Removing: /var/run/dpdk/spdk_pid63518
00:18:59.978 Removing: /var/run/dpdk/spdk_pid64903
00:18:59.978 Removing: /var/run/dpdk/spdk_pid65156
00:18:59.978 Removing: /var/run/dpdk/spdk_pid65296
00:18:59.978 Removing: /var/run/dpdk/spdk_pid66682
00:18:59.978 Removing: /var/run/dpdk/spdk_pid66930
00:18:59.978 Removing: /var/run/dpdk/spdk_pid67081
00:18:59.978 Removing: /var/run/dpdk/spdk_pid68461
00:18:59.978 Removing: /var/run/dpdk/spdk_pid68902
00:18:59.978 Removing: /var/run/dpdk/spdk_pid69053
00:18:59.978 Removing: /var/run/dpdk/spdk_pid70533
00:18:59.978 Removing: /var/run/dpdk/spdk_pid70792
00:18:59.978 Removing: /var/run/dpdk/spdk_pid70942
00:18:59.978 Removing: /var/run/dpdk/spdk_pid72426
00:18:59.978 Removing: /var/run/dpdk/spdk_pid72685
00:18:59.978 Removing: /var/run/dpdk/spdk_pid72834
00:18:59.978 Removing: /var/run/dpdk/spdk_pid74307
00:18:59.978 Removing: /var/run/dpdk/spdk_pid74789
00:18:59.978 Removing: /var/run/dpdk/spdk_pid74933
00:18:59.978 Removing: /var/run/dpdk/spdk_pid75078
00:18:59.978 Removing: /var/run/dpdk/spdk_pid75486
00:18:59.978 Removing: /var/run/dpdk/spdk_pid76216
00:18:59.978 Removing: /var/run/dpdk/spdk_pid76586
00:18:59.978 Removing: /var/run/dpdk/spdk_pid77275
00:18:59.978 Removing: /var/run/dpdk/spdk_pid77710
00:18:59.978 Removing: /var/run/dpdk/spdk_pid78464
00:18:59.978 Removing: /var/run/dpdk/spdk_pid78868
00:18:59.978 Removing: /var/run/dpdk/spdk_pid80828
00:18:59.978 Removing: /var/run/dpdk/spdk_pid81263
00:18:59.978 Removing: /var/run/dpdk/spdk_pid81705
00:18:59.978 Removing: /var/run/dpdk/spdk_pid83778
00:18:59.978 Removing: /var/run/dpdk/spdk_pid84259
00:18:59.978 Removing: /var/run/dpdk/spdk_pid84765
00:18:59.978 Removing: /var/run/dpdk/spdk_pid85816
00:18:59.978 Removing: /var/run/dpdk/spdk_pid86139
00:18:59.978 Removing: /var/run/dpdk/spdk_pid87077
00:18:59.978 Removing: /var/run/dpdk/spdk_pid87402
00:18:59.978 Removing: /var/run/dpdk/spdk_pid88329
00:18:59.978 Removing: /var/run/dpdk/spdk_pid88652
00:18:59.978 Removing: /var/run/dpdk/spdk_pid89329
00:18:59.978 Removing: /var/run/dpdk/spdk_pid89609
00:18:59.978 Removing: /var/run/dpdk/spdk_pid89671
00:18:59.978 Removing: /var/run/dpdk/spdk_pid89713
00:18:59.978 Removing: /var/run/dpdk/spdk_pid89968
00:18:59.978 Removing: /var/run/dpdk/spdk_pid90141
00:19:00.239 Removing: /var/run/dpdk/spdk_pid90238
00:19:00.239 Removing: /var/run/dpdk/spdk_pid90342
00:19:00.239 Removing: /var/run/dpdk/spdk_pid90395
00:19:00.239 Removing: /var/run/dpdk/spdk_pid90415
00:19:00.239 Clean
00:19:00.239 20:14:32 -- common/autotest_common.sh@1453 -- # return 0
00:19:00.239 20:14:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:00.239 20:14:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:00.239 20:14:32 -- common/autotest_common.sh@10 -- # set +x
00:19:00.239 20:14:32 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:00.239 20:14:32 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:00.239 20:14:32 -- common/autotest_common.sh@10 -- # set +x
00:19:00.239 20:14:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:00.240 20:14:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:00.240 20:14:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:00.240 20:14:32 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:00.240 20:14:32 -- spdk/autotest.sh@398 -- # hostname
00:19:00.240 20:14:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:00.500 geninfo: WARNING: invalid characters removed from testname!
00:19:22.461 20:14:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:23.839 20:14:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:25.745 20:14:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:27.653 20:14:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:29.562 20:15:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:31.471 20:15:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:34.012 20:15:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:19:34.012 20:15:05 -- spdk/autorun.sh@1 -- $ timing_finish
00:19:34.012 20:15:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:19:34.012 20:15:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:19:34.012 20:15:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:19:34.012 20:15:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:34.012 + [[ -n 5426 ]]
00:19:34.012 + sudo kill 5426
00:19:34.021 [Pipeline] }
00:19:34.036 [Pipeline] // timeout
00:19:34.040 [Pipeline] }
00:19:34.054 [Pipeline] // stage
00:19:34.059 [Pipeline] }
00:19:34.072 [Pipeline] // catchError
00:19:34.080 [Pipeline] stage
00:19:34.082 [Pipeline] { (Stop VM)
00:19:34.106 [Pipeline] sh
00:19:34.387 + vagrant halt
00:19:36.926 ==> default: Halting domain...
00:19:43.510 [Pipeline] sh
00:19:43.791 + vagrant destroy -f
00:19:46.329 ==> default: Removing domain...
00:19:46.343 [Pipeline] sh
00:19:46.632 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:19:46.641 [Pipeline] }
00:19:46.661 [Pipeline] // stage
00:19:46.668 [Pipeline] }
00:19:46.686 [Pipeline] // dir
00:19:46.692 [Pipeline] }
00:19:46.710 [Pipeline] // wrap
00:19:46.718 [Pipeline] }
00:19:46.733 [Pipeline] // catchError
00:19:46.744 [Pipeline] stage
00:19:46.747 [Pipeline] { (Epilogue)
00:19:46.761 [Pipeline] sh
00:19:47.048 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:19:51.255 [Pipeline] catchError
00:19:51.257 [Pipeline] {
00:19:51.271 [Pipeline] sh
00:19:51.560 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:19:51.560 Artifacts sizes are good
00:19:51.569 [Pipeline] }
00:19:51.583 [Pipeline] // catchError
00:19:51.603 [Pipeline] archiveArtifacts
00:19:51.611 Archiving artifacts
00:19:51.754 [Pipeline] cleanWs
00:19:51.765 [WS-CLEANUP] Deleting project workspace...
00:19:51.765 [WS-CLEANUP] Deferred wipeout is used...
00:19:51.773 [WS-CLEANUP] done
00:19:51.774 [Pipeline] }
00:19:51.785 [Pipeline] // stage
00:19:51.789 [Pipeline] }
00:19:51.811 [Pipeline] // node
00:19:51.833 [Pipeline] End of Pipeline
00:19:51.875 Finished: SUCCESS